00:00:00.001 Started by upstream project "autotest-nightly" build number 4337
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3700
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.155 The recommended git tool is: git
00:00:00.155 using credential 00000000-0000-0000-0000-000000000002
00:00:00.157 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.218 Fetching changes from the remote Git repository
00:00:00.222 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.286 Using shallow fetch with depth 1
00:00:00.286 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.286 > git --version # timeout=10
00:00:00.321 > git --version # 'git version 2.39.2'
00:00:00.321 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.346 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.346 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.387 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.398 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.409 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.409 > git config core.sparsecheckout # timeout=10
00:00:07.419 > git read-tree -mu HEAD # timeout=10
00:00:07.436 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.491 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.491 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.576 [Pipeline] Start of Pipeline
00:00:07.586 [Pipeline] library
00:00:07.587 Loading library shm_lib@master
00:00:07.587 Library shm_lib@master is cached. Copying from home.
00:00:07.602 [Pipeline] node
00:00:07.627 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.629 [Pipeline] {
00:00:07.639 [Pipeline] catchError
00:00:07.641 [Pipeline] {
00:00:07.652 [Pipeline] wrap
00:00:07.658 [Pipeline] {
00:00:07.665 [Pipeline] stage
00:00:07.667 [Pipeline] { (Prologue)
00:00:07.682 [Pipeline] echo
00:00:07.683 Node: VM-host-SM38
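The checkout at the top of the log pins jbp to a single commit without pulling full history: point origin at the repository, shallow-fetch one ref, then detach onto FETCH_HEAD. A minimal standalone sketch of that sequence, assuming the same repository URL and leaving out the proxy and GIT_ASKPASS credential handling the job configures:

  # Shallow-fetch only the tip of master, then check out the fetched commit
  git init jbp && cd jbp
  git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
  git fetch --tags --force --progress --depth=1 -- \
      https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
  git checkout -f "$(git rev-parse 'FETCH_HEAD^{commit}')"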
00:00:07.687 [Pipeline] cleanWs
00:00:07.696 [WS-CLEANUP] Deleting project workspace...
00:00:07.696 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.703 [WS-CLEANUP] done
00:00:07.922 [Pipeline] setCustomBuildProperty
00:00:07.993 [Pipeline] httpRequest
00:00:08.321 [Pipeline] echo
00:00:08.322 Sorcerer 10.211.164.20 is alive
00:00:08.330 [Pipeline] retry
00:00:08.332 [Pipeline] {
00:00:08.342 [Pipeline] httpRequest
00:00:08.347 HttpMethod: GET
00:00:08.347 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.348 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.369 Response Code: HTTP/1.1 200 OK
00:00:08.369 Success: Status code 200 is in the accepted range: 200,404
00:00:08.370 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.220 [Pipeline] }
00:00:34.237 [Pipeline] // retry
00:00:34.245 [Pipeline] sh
00:00:34.533 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:34.552 [Pipeline] httpRequest
00:00:34.953 [Pipeline] echo
00:00:34.955 Sorcerer 10.211.164.20 is alive
00:00:34.965 [Pipeline] retry
00:00:34.967 [Pipeline] {
00:00:34.981 [Pipeline] httpRequest
00:00:34.987 HttpMethod: GET
00:00:34.987 URL: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:34.988 Sending request to url: http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:00:35.003 Response Code: HTTP/1.1 200 OK
00:00:35.003 Success: Status code 200 is in the accepted range: 200,404
00:00:35.004 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:01:33.896 [Pipeline] }
00:01:33.915 [Pipeline] // retry
00:01:33.923 [Pipeline] sh
00:01:34.202 + tar --no-same-owner -xf spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
00:01:37.494 [Pipeline] sh
00:01:37.772 + git -C spdk log --oneline -n5
00:01:37.772 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:01:37.772 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller
00:01:37.772 48454bb28 bdev/nvme: Add lock to unprotected operations around detach controller
00:01:37.772 4b59d7893 bdev/nvme: Use nbdev always for local nvme_bdev pointer variables
00:01:37.772 e56f1618f lib/ftl: Add explicit support for write unit sizes of base device
00:01:37.791 [Pipeline] writeFile
00:01:37.806 [Pipeline] sh
00:01:38.230 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:38.240 [Pipeline] sh
00:01:38.516 + cat autorun-spdk.conf
00:01:38.516 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.516 SPDK_TEST_NVME=1
00:01:38.516 SPDK_TEST_FTL=1
00:01:38.516 SPDK_TEST_ISAL=1
00:01:38.516 SPDK_RUN_ASAN=1
00:01:38.516 SPDK_RUN_UBSAN=1
00:01:38.516 SPDK_TEST_XNVME=1
00:01:38.516 SPDK_TEST_NVME_FDP=1
00:01:38.516 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:38.521 RUN_NIGHTLY=1
00:01:38.524 [Pipeline] }
00:01:38.538 [Pipeline] // stage
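Both source tarballs above are fetched from the Sorcerer mirror inside [Pipeline] retry blocks and unpacked with --no-same-owner so the extracted files belong to the build user rather than the archive's original owner. A rough shell equivalent of one download, with curl standing in for the Jenkins httpRequest step and an illustrative retry count:

  url=http://10.211.164.20/packages/spdk_8d3947977640da882a3cdcc21a7575115b7e7787.tar.gz
  curl --fail --retry 3 -O "$url"       # approximates the pipeline's retry {} wrapper
  tar --no-same-owner -xf "${url##*/}"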
00:01:38.553 [Pipeline] stage
00:01:38.555 [Pipeline] { (Run VM)
00:01:38.567 [Pipeline] sh
00:01:38.842 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:38.842 + echo 'Start stage prepare_nvme.sh'
00:01:38.842 Start stage prepare_nvme.sh
00:01:38.842 + [[ -n 5 ]]
00:01:38.842 + disk_prefix=ex5
00:01:38.843 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:38.843 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:38.843 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:38.843 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:38.843 ++ SPDK_TEST_NVME=1
00:01:38.843 ++ SPDK_TEST_FTL=1
00:01:38.843 ++ SPDK_TEST_ISAL=1
00:01:38.843 ++ SPDK_RUN_ASAN=1
00:01:38.843 ++ SPDK_RUN_UBSAN=1
00:01:38.843 ++ SPDK_TEST_XNVME=1
00:01:38.843 ++ SPDK_TEST_NVME_FDP=1
00:01:38.843 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:38.843 ++ RUN_NIGHTLY=1
00:01:38.843 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:38.843 + nvme_files=()
00:01:38.843 + declare -A nvme_files
00:01:38.843 + backend_dir=/var/lib/libvirt/images/backends
00:01:38.843 + nvme_files['nvme.img']=5G
00:01:38.843 + nvme_files['nvme-cmb.img']=5G
00:01:38.843 + nvme_files['nvme-multi0.img']=4G
00:01:38.843 + nvme_files['nvme-multi1.img']=4G
00:01:38.843 + nvme_files['nvme-multi2.img']=4G
00:01:38.843 + nvme_files['nvme-openstack.img']=8G
00:01:38.843 + nvme_files['nvme-zns.img']=5G
00:01:38.843 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:38.843 + (( SPDK_TEST_FTL == 1 ))
00:01:38.843 + nvme_files["nvme-ftl.img"]=6G
00:01:38.843 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:38.843 + nvme_files["nvme-fdp.img"]=1G
00:01:38.843 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:38.843 + for nvme in "${!nvme_files[@]}"
00:01:38.843 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G
00:01:38.843 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:39.099 + for nvme in "${!nvme_files[@]}"
00:01:39.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G
00:01:39.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:39.099 + for nvme in "${!nvme_files[@]}"
00:01:39.099 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G
00:01:39.099 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:39.099 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu
00:01:39.099 + echo 'End stage prepare_nvme.sh'
00:01:39.099 End stage prepare_nvme.sh
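prepare_nvme.sh drives image creation from an associative array of image-name-to-size mappings, adding the FTL and FDP images only when the matching SPDK_TEST_* flags are set. The "Formatting ..., fmt=raw ... preallocation=falloc" lines are characteristic qemu-img output, so a condensed sketch of the loop might look like this (assuming create_nvme_img.sh wraps qemu-img, which the output suggests but the log does not show):

  declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G [nvme-ftl.img]=6G [nvme-fdp.img]=1G)
  backend_dir=/var/lib/libvirt/images/backends
  for nvme in "${!nvme_files[@]}"; do
      # Raw image with fallocate()-style preallocation, matching the Formatting lines
      sudo qemu-img create -f raw -o preallocation=falloc \
          "$backend_dir/ex5-$nvme" "${nvme_files[$nvme]}"
  done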
00:01:39.108 [Pipeline] sh
00:01:39.384 + DISTRO=fedora39
00:01:39.384 + CPUS=10
00:01:39.384 + RAM=12288
00:01:39.384 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:39.384 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:39.384
00:01:39.384 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:39.384 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:39.384 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:39.384 HELP=0
00:01:39.384 DRY_RUN=0
00:01:39.384 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,
00:01:39.384 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:39.384 NVME_AUTO_CREATE=0
00:01:39.384 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,,
00:01:39.384 NVME_CMB=,,,,
00:01:39.384 NVME_PMR=,,,,
00:01:39.384 NVME_ZNS=,,,,
00:01:39.384 NVME_MS=true,,,,
00:01:39.384 NVME_FDP=,,,on,
00:01:39.384 SPDK_VAGRANT_DISTRO=fedora39
00:01:39.384 SPDK_VAGRANT_VMCPU=10
00:01:39.384 SPDK_VAGRANT_VMRAM=12288
00:01:39.384 SPDK_VAGRANT_PROVIDER=libvirt
00:01:39.384 SPDK_VAGRANT_HTTP_PROXY=
00:01:39.384 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:39.384 SPDK_OPENSTACK_NETWORK=0
00:01:39.384 VAGRANT_PACKAGE_BOX=0
00:01:39.384 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:39.384 FORCE_DISTRO=true
00:01:39.384 VAGRANT_BOX_VERSION=
00:01:39.384 EXTRA_VAGRANTFILES=
00:01:39.384 NIC_MODEL=e1000
00:01:39.384
00:01:39.384 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:39.384 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:41.907 Bringing machine 'default' up with 'libvirt' provider...
00:01:42.165 ==> default: Creating image (snapshot of base box volume).
00:01:42.422 ==> default: Creating domain with the following settings...
00:01:42.422 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733417296_918ef504ef40c786e575
00:01:42.422 ==> default: -- Domain type: kvm
00:01:42.422 ==> default: -- Cpus: 10
00:01:42.422 ==> default: -- Feature: acpi
00:01:42.422 ==> default: -- Feature: apic
00:01:42.422 ==> default: -- Feature: pae
00:01:42.422 ==> default: -- Memory: 12288M
00:01:42.422 ==> default: -- Memory Backing: hugepages:
00:01:42.422 ==> default: -- Management MAC:
00:01:42.422 ==> default: -- Loader:
00:01:42.422 ==> default: -- Nvram:
00:01:42.422 ==> default: -- Base box: spdk/fedora39
00:01:42.422 ==> default: -- Storage pool: default
00:01:42.422 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733417296_918ef504ef40c786e575.img (20G)
00:01:42.422 ==> default: -- Volume Cache: default
00:01:42.422 ==> default: -- Kernel:
00:01:42.422 ==> default: -- Initrd:
00:01:42.422 ==> default: -- Graphics Type: vnc
00:01:42.422 ==> default: -- Graphics Port: -1
00:01:42.422 ==> default: -- Graphics IP: 127.0.0.1
00:01:42.422 ==> default: -- Graphics Password: Not defined
00:01:42.422 ==> default: -- Video Type: cirrus
00:01:42.422 ==> default: -- Video VRAM: 9216
00:01:42.423 ==> default: -- Sound Type:
00:01:42.423 ==> default: -- Keymap: en-us
00:01:42.423 ==> default: -- TPM Path:
00:01:42.423 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:42.423 ==> default: -- Command line args:
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:42.423 ==> default: -> value=-drive,
00:01:42.423 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:42.423 ==> default: -> value=-device,
00:01:42.423 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:42.423 ==> default: Creating shared folders metadata...
00:01:42.423 ==> default: Starting domain.
00:01:44.321 ==> default: Waiting for domain to get an IP address...
00:02:02.395 ==> default: Waiting for SSH to become available...
00:02:02.395 ==> default: Configuring and enabling network interfaces...
00:02:04.293 default: SSH address: 192.168.121.155:22
00:02:04.293 default: SSH username: vagrant
00:02:04.293 default: SSH auth method: private key
00:02:06.823 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:14.957 ==> default: Mounting SSHFS shared folder...
00:02:15.902 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:15.902 ==> default: Checking Mount..
00:02:16.847 ==> default: Folder Successfully Mounted!
00:02:17.107
00:02:17.107 SUCCESS!
00:02:17.107
00:02:17.107 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:17.107 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:17.107 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:17.107
00:02:17.116 [Pipeline] }
00:02:17.128 [Pipeline] // stage
00:02:17.136 [Pipeline] dir
00:02:17.136 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:17.138 [Pipeline] {
00:02:17.149 [Pipeline] catchError
00:02:17.151 [Pipeline] {
00:02:17.162 [Pipeline] sh
00:02:17.445 + vagrant ssh-config --host vagrant
00:02:17.445 + sed -ne '/^Host/,$p'
00:02:17.445 + tee ssh_conf
00:02:20.746 Host vagrant
00:02:20.746 HostName 192.168.121.155
00:02:20.746 User vagrant
00:02:20.746 Port 22
00:02:20.746 UserKnownHostsFile /dev/null
00:02:20.746 StrictHostKeyChecking no
00:02:20.746 PasswordAuthentication no
00:02:20.746 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:20.746 IdentitiesOnly yes
00:02:20.746 LogLevel FATAL
00:02:20.746 ForwardAgent yes
00:02:20.746 ForwardX11 yes
00:02:20.746
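Buried in the domain settings above is the interesting part of this job: the fourth controller is attached to an NVMe subsystem with Flexible Data Placement enabled (96M of reclaim-unit space, 2 reclaim groups, 8 reclaim-unit handles). Restated as a single QEMU invocation for readability, using only arguments that appear in the log and omitting the other three controllers:

  qemu-system-x86_64 \
      -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
      -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0 \
      -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096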
00:02:20.762 [Pipeline] withEnv
00:02:20.764 [Pipeline] {
00:02:20.778 [Pipeline] sh
00:02:21.068 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:21.068 source /etc/os-release
00:02:21.068 [[ -e /image.version ]] && img=$(< /image.version)
00:02:21.068 # Minimal, systemd-like check.
00:02:21.068 if [[ -e /.dockerenv ]]; then
00:02:21.068 # Clear garbage from the node'\''s name:
00:02:21.068 # agt-er_autotest_547-896 -> autotest_547-896
00:02:21.068 # $HOSTNAME is the actual container id
00:02:21.068 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:21.068 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:21.068 # We can assume this is a mount from a host where container is running,
00:02:21.068 # so fetch its hostname to easily identify the target swarm worker.
00:02:21.068 container="$(< /etc/hostname) ($agent)"
00:02:21.068 else
00:02:21.068 # Fallback
00:02:21.068 container=$agent
00:02:21.068 fi
00:02:21.068 fi
00:02:21.068 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:21.068 '
00:02:21.086 [Pipeline] }
00:02:21.101 [Pipeline] // withEnv
00:02:21.109 [Pipeline] setCustomBuildProperty
00:02:21.124 [Pipeline] stage
00:02:21.126 [Pipeline] { (Tests)
00:02:21.142 [Pipeline] sh
00:02:21.429 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:21.441 [Pipeline] sh
00:02:21.725 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:21.798 [Pipeline] timeout
00:02:21.798 Timeout set to expire in 50 min
00:02:21.801 [Pipeline] {
00:02:21.815 [Pipeline] sh
00:02:22.098 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:22.664 HEAD is now at 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing
00:02:22.677 [Pipeline] sh
00:02:22.960 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:22.974 [Pipeline] sh
00:02:23.256 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:23.273 [Pipeline] sh
00:02:23.639 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:23.639 ++ readlink -f spdk_repo
00:02:23.639 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:23.639 + [[ -n /home/vagrant/spdk_repo ]]
00:02:23.639 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:23.639 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:23.639 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:23.639 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:23.639 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:23.639 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:23.639 + cd /home/vagrant/spdk_repo
00:02:23.639 + source /etc/os-release
00:02:23.639 ++ NAME='Fedora Linux'
00:02:23.639 ++ VERSION='39 (Cloud Edition)'
00:02:23.639 ++ ID=fedora
00:02:23.639 ++ VERSION_ID=39
00:02:23.639 ++ VERSION_CODENAME=
00:02:23.639 ++ PLATFORM_ID=platform:f39
00:02:23.639 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:23.639 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:23.639 ++ LOGO=fedora-logo-icon
00:02:23.639 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:23.639 ++ HOME_URL=https://fedoraproject.org/
00:02:23.639 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:23.639 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:23.639 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:23.639 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:23.639 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:23.639 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:23.639 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:23.639 ++ SUPPORT_END=2024-11-12
00:02:23.639 ++ VARIANT='Cloud Edition'
00:02:23.639 ++ VARIANT_ID=cloud
00:02:23.639 + uname -a
00:02:23.639 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:23.639 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:24.210 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:24.210 Hugepages
00:02:24.210 node hugesize free / total
00:02:24.210 node0 1048576kB 0 / 0
00:02:24.210 node0 2048kB 0 / 0
00:02:24.210
00:02:24.210 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:24.210 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:24.210 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:24.482 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:24.482 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:24.482 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:24.482 + rm -f /tmp/spdk-ld-path
00:02:24.482 + source autorun-spdk.conf
00:02:24.482 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:24.482 ++ SPDK_TEST_NVME=1
00:02:24.482 ++ SPDK_TEST_FTL=1
00:02:24.482 ++ SPDK_TEST_ISAL=1
00:02:24.482 ++ SPDK_RUN_ASAN=1
00:02:24.482 ++ SPDK_RUN_UBSAN=1
00:02:24.482 ++ SPDK_TEST_XNVME=1
00:02:24.482 ++ SPDK_TEST_NVME_FDP=1
00:02:24.482 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:24.482 ++ RUN_NIGHTLY=1
00:02:24.482 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:24.482 + [[ -n '' ]]
00:02:24.482 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:24.482 + for M in /var/spdk/build-*-manifest.txt
00:02:24.482 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:24.482 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:24.482 + for M in /var/spdk/build-*-manifest.txt
00:02:24.482 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:24.482 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:24.482 + for M in /var/spdk/build-*-manifest.txt
00:02:24.482 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:24.482 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:24.482 ++ uname
00:02:24.482 + [[ Linux == \L\i\n\u\x ]]
00:02:24.482 + sudo dmesg -T
00:02:24.482 + sudo dmesg --clear
00:02:24.482 + dmesg_pid=5044
+ [[ Fedora Linux == FreeBSD ]]
00:02:24.482 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:24.482 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:24.482 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:24.482 + sudo dmesg -Tw
00:02:24.482 + [[ -x /usr/src/fio-static/fio ]]
00:02:24.482 + export FIO_BIN=/usr/src/fio-static/fio
00:02:24.482 + FIO_BIN=/usr/src/fio-static/fio
00:02:24.482 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:24.482 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:24.482 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:24.483 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:24.483 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:24.483 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:24.483 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:24.483 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:24.483 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:24.483 16:48:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:48:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
16:48:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
16:48:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
16:48:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
16:48:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
16:48:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
16:48:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
16:48:58 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
16:48:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
16:48:58 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
16:48:58 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
16:48:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
16:48:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
16:48:58 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
16:48:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
16:48:58 -- scripts/common.sh@15 -- $ shopt -s extglob
16:48:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
16:48:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
16:48:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
16:48:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:48:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:48:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:48:58 -- paths/export.sh@5 -- $ export PATH
16:48:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
16:48:58 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
16:48:58 -- common/autobuild_common.sh@493 -- $ date +%s
16:48:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733417338.XXXXXX
16:48:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733417338.QT00Wh
16:48:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
16:48:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
16:48:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
16:48:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
16:48:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
16:48:58 -- common/autobuild_common.sh@509 -- $ get_config_params
16:48:58 -- common/autotest_common.sh@409 -- $ xtrace_disable
16:48:58 -- common/autotest_common.sh@10 -- $ set +x
16:48:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
16:48:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
16:48:58 -- pm/common@17 -- $ local monitor
16:48:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:48:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
16:48:58 -- pm/common@25 -- $ sleep 1
16:48:58 -- pm/common@21 -- $ date +%s
16:48:58 -- pm/common@21 -- $ date +%s
16:48:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733417338
16:48:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733417338
00:02:24.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733417338_collect-cpu-load.pm.log
00:02:24.741 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733417338_collect-vmstat.pm.log
16:48:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
16:48:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
16:48:59 -- spdk/autobuild.sh@12 -- $ umask 022
16:48:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
16:48:59 -- spdk/autobuild.sh@16 -- $ date -u
00:02:25.679 Thu Dec 5 04:48:59 PM UTC 2024
16:48:59 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:25.679 v25.01-pre-296-g8d3947977
16:48:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
16:48:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
16:48:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:48:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:48:59 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.679 ************************************
00:02:25.679 START TEST asan
00:02:25.679 ************************************
00:02:25.679 using asan
16:48:59 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:25.679
00:02:25.679 real 0m0.000s
00:02:25.679 user 0m0.000s
00:02:25.679 sys 0m0.000s
16:48:59 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:25.679 ************************************
00:02:25.679 END TEST asan
00:02:25.679 ************************************
16:48:59 asan -- common/autotest_common.sh@10 -- $ set +x
16:48:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
16:48:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
16:48:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:48:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:48:59 -- common/autotest_common.sh@10 -- $ set +x
00:02:25.679 ************************************
00:02:25.679 START TEST ubsan
00:02:25.679 ************************************
00:02:25.679 using ubsan
16:48:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:25.679
00:02:25.679 real 0m0.000s
00:02:25.679 user 0m0.000s
00:02:25.679 sys 0m0.000s
16:48:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:25.679 ************************************
16:48:59 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:25.679 END TEST ubsan
00:02:25.679 ************************************
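Each sanitizer check above runs through SPDK's run_test helper, which brackets a command with START/END banners and a timing block. A much-simplified sketch of that pattern (this is not SPDK's actual implementation, which also records timings and xtrace state):

  run_test() {
      # Frame an arbitrary command with the banners seen in the log
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  run_test asan echo 'using asan'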
16:48:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
16:48:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
16:48:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
16:48:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
16:48:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
16:48:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
16:48:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
16:48:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
16:48:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:25.679 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:25.679 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:26.250 Using 'verbs' RDMA provider
00:02:36.832 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:46.797 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:46.797 Creating mk/config.mk...done.
00:02:46.797 Creating mk/cc.flags.mk...done.
00:02:46.797 Type 'make' to build.
16:49:21 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
16:49:21 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
16:49:21 -- common/autotest_common.sh@1111 -- $ xtrace_disable
16:49:21 -- common/autotest_common.sh@10 -- $ set +x
00:02:46.797 ************************************
00:02:46.797 START TEST make
00:02:46.797 ************************************
16:49:21 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:47.054 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:47.054 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:47.054 meson setup builddir \
00:02:47.054 -Dwith-libaio=enabled \
00:02:47.054 -Dwith-liburing=enabled \
00:02:47.054 -Dwith-libvfn=disabled \
00:02:47.054 -Dwith-spdk=disabled \
00:02:47.054 -Dexamples=false \
00:02:47.054 -Dtests=false \
00:02:47.054 -Dtools=false && \
00:02:47.054 meson compile -C builddir && \
00:02:47.054 cd -)
00:02:47.054 make[1]: Nothing to be done for 'all'.
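The xnvme subproject is configured through meson feature options rather than autoconf-style flags: libaio and liburing backends are enabled, libvfn and the spdk backend are off, and examples/tests/tools are skipped. One way to double-check what a finished build directory was configured with (a generic meson command, not something this job runs):

  cd /home/vagrant/spdk_repo/spdk/xnvme
  # With no -D arguments, 'meson configure' only prints current option values
  meson configure builddir | grep -E 'with-(libaio|liburing|libvfn|spdk)'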
00:02:48.955 The Meson build system
00:02:48.955 Version: 1.5.0
00:02:48.955 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:48.955 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:48.955 Build type: native build
00:02:48.955 Project name: xnvme
00:02:48.955 Project version: 0.7.5
00:02:48.955 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:48.955 C linker for the host machine: cc ld.bfd 2.40-14
00:02:48.955 Host machine cpu family: x86_64
00:02:48.955 Host machine cpu: x86_64
00:02:48.955 Message: host_machine.system: linux
00:02:48.955 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:48.955 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:48.955 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:48.955 Run-time dependency threads found: YES
00:02:48.955 Has header "setupapi.h" : NO
00:02:48.955 Has header "linux/blkzoned.h" : YES
00:02:48.955 Has header "linux/blkzoned.h" : YES (cached)
00:02:48.955 Has header "libaio.h" : YES
00:02:48.955 Library aio found: YES
00:02:48.955 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:48.955 Run-time dependency liburing found: YES 2.2
00:02:48.955 Dependency libvfn skipped: feature with-libvfn disabled
00:02:48.955 Found CMake: /usr/bin/cmake (3.27.7)
00:02:48.955 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:48.955 Subproject spdk : skipped: feature with-spdk disabled
00:02:48.955 Run-time dependency appleframeworks found: NO (tried framework)
00:02:48.955 Run-time dependency appleframeworks found: NO (tried framework)
00:02:48.955 Library rt found: YES
00:02:48.955 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:48.955 Configuring xnvme_config.h using configuration
00:02:48.955 Configuring xnvme.spec using configuration
00:02:48.955 Run-time dependency bash-completion found: YES 2.11
00:02:48.955 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:48.955 Program cp found: YES (/usr/bin/cp)
00:02:48.955 Build targets in project: 3
00:02:48.955
00:02:48.955 xnvme 0.7.5
00:02:48.955
00:02:48.955 Subprojects
00:02:48.955 spdk : NO Feature 'with-spdk' disabled
00:02:48.955
00:02:48.955 User defined options
00:02:48.955 examples : false
00:02:48.955 tests : false
00:02:48.955 tools : false
00:02:48.955 with-libaio : enabled
00:02:48.955 with-liburing: enabled
00:02:48.955 with-libvfn : disabled
00:02:48.955 with-spdk : disabled
00:02:48.955
00:02:48.955 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:48.955 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:48.955 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:49.213 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:49.213 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:49.213 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:49.213 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:49.213 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:49.213 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:49.213 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:49.213 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:49.213 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:49.213 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:49.213 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:49.213 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:49.213 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:49.213 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:49.213 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:49.213 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:49.213 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:49.213 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:49.213 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:49.213 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:49.213 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:49.213 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:49.213 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:49.213 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:49.471 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:49.471 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:49.471 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:49.471 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:49.471 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:49.471 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:49.471 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:49.471 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:49.471 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:49.471 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:49.471 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:49.471 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:49.471 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:49.471 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:49.471 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:49.471 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:49.471 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:49.471 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:49.471 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:49.471 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:49.471 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:49.471 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:49.471 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:49.471 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:49.471 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:49.471 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:49.471 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:49.471 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:49.471 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:49.471 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:49.471 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:49.471 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:49.471 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:49.471 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:49.729 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:49.729 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:49.729 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:49.729 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:49.729 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:49.729 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:49.729 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:49.729 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:49.729 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:49.729 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:49.729 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:49.729 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:49.729 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:49.729 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:49.986 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:49.986 [75/76] Linking static target lib/libxnvme.a
00:02:49.986 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:49.986 INFO: autodetecting backend as ninja
00:02:49.986 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:49.986 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:56.539 The Meson build system
00:02:56.539 Version: 1.5.0
00:02:56.539 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:56.539 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:56.539 Build type: native build
00:02:56.539 Program cat found: YES (/usr/bin/cat)
00:02:56.539 Project name: DPDK
00:02:56.539 Project version: 24.03.0
00:02:56.539 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:56.539 C linker for the host machine: cc ld.bfd 2.40-14
00:02:56.539 Host machine cpu family: x86_64
00:02:56.539 Host machine cpu: x86_64
00:02:56.539 Message: ## Building in Developer Mode ##
00:02:56.539 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:56.539 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:56.539 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:56.539 Program python3 found: YES (/usr/bin/python3)
00:02:56.539 Program cat found: YES (/usr/bin/cat)
00:02:56.539 Compiler for C supports arguments -march=native: YES
00:02:56.539 Checking for size of "void *" : 8
00:02:56.539 Checking for size of "void *" : 8 (cached)
00:02:56.539 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:56.539 Library m found: YES
00:02:56.539 Library numa found: YES
00:02:56.539 Has header "numaif.h" : YES
00:02:56.539 Library fdt found: NO
00:02:56.539 Library execinfo found: NO
00:02:56.539 Has header "execinfo.h" : YES
00:02:56.539 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:56.539 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:56.539 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:56.539 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:56.539 Run-time dependency openssl found: YES 3.1.1
00:02:56.539 Run-time dependency libpcap found: YES 1.10.4
00:02:56.539 Has header "pcap.h" with dependency libpcap: YES
00:02:56.539 Compiler for C supports arguments -Wcast-qual: YES
00:02:56.539 Compiler for C supports arguments -Wdeprecated: YES
00:02:56.539 Compiler for C supports arguments -Wformat: YES
00:02:56.539 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:56.539 Compiler for C supports arguments -Wformat-security: NO
00:02:56.539 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:56.539 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:56.539 Compiler for C supports arguments -Wnested-externs: YES
00:02:56.539 Compiler for C supports arguments -Wold-style-definition: YES
00:02:56.539 Compiler for C supports arguments -Wpointer-arith: YES
00:02:56.539 Compiler for C supports arguments -Wsign-compare: YES
00:02:56.539 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:56.539 Compiler for C supports arguments -Wundef: YES
00:02:56.539 Compiler for C supports arguments -Wwrite-strings: YES
00:02:56.539 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:56.539 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:56.539 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:56.539 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:56.539 Program objdump found: YES (/usr/bin/objdump)
00:02:56.539 Compiler for C supports arguments -mavx512f: YES
00:02:56.539 Checking if "AVX512 checking" compiles: YES
00:02:56.539 Fetching value of define "__SSE4_2__" : 1
00:02:56.539 Fetching value of define "__AES__" : 1
00:02:56.539 Fetching value of define "__AVX__" : 1
00:02:56.539 Fetching value of define "__AVX2__" : 1
00:02:56.539 Fetching value of define "__AVX512BW__" : 1
00:02:56.539 Fetching value of define "__AVX512CD__" : 1
00:02:56.539 Fetching value of define "__AVX512DQ__" : 1
00:02:56.539 Fetching value of define "__AVX512F__" : 1
00:02:56.539 Fetching value of define "__AVX512VL__" : 1
00:02:56.539 Fetching value of define "__PCLMUL__" : 1
00:02:56.539 Fetching value of define "__RDRND__" : 1
00:02:56.539 Fetching value of define "__RDSEED__" : 1
00:02:56.539 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:56.539 Fetching value of define "__znver1__" : (undefined)
00:02:56.539 Fetching value of define "__znver2__" : (undefined)
00:02:56.540 Fetching value of define "__znver3__" : (undefined)
00:02:56.540 Fetching value of define "__znver4__" : (undefined)
00:02:56.540 Library asan found: YES
00:02:56.540 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:56.540 Message: lib/log: Defining dependency "log"
00:02:56.540 Message: lib/kvargs: Defining dependency "kvargs"
00:02:56.540 Message: lib/telemetry: Defining dependency "telemetry"
00:02:56.540 Library rt found: YES
00:02:56.540 Checking for function "getentropy" : NO
00:02:56.540 Message: lib/eal: Defining dependency "eal"
00:02:56.540 Message: lib/ring: Defining dependency "ring"
00:02:56.540 Message: lib/rcu: Defining dependency "rcu"
00:02:56.540 Message: lib/mempool: Defining dependency "mempool"
00:02:56.540 Message: lib/mbuf: Defining dependency "mbuf"
00:02:56.540 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:56.540 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:56.540 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:56.540 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:56.540 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:56.540 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:56.540 Compiler for C supports arguments -mpclmul: YES
00:02:56.540 Compiler for C supports arguments -maes: YES
00:02:56.540 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:56.540 Compiler for C supports arguments -mavx512bw: YES
00:02:56.540 Compiler for C supports arguments -mavx512dq: YES
00:02:56.540 Compiler for C supports arguments -mavx512vl: YES
00:02:56.540 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:56.540 Compiler for C supports arguments -mavx2: YES
00:02:56.540 Compiler for C supports arguments -mavx: YES
00:02:56.540 Message: lib/net: Defining dependency "net"
00:02:56.540 Message: lib/meter: Defining dependency "meter"
00:02:56.540 Message: lib/ethdev: Defining dependency "ethdev"
00:02:56.540 Message: lib/pci: Defining dependency "pci"
00:02:56.540 Message: lib/cmdline: Defining dependency "cmdline"
00:02:56.540 Message: lib/hash: Defining dependency "hash"
00:02:56.540 Message: lib/timer: Defining dependency "timer"
00:02:56.540 Message: lib/compressdev: Defining dependency "compressdev"
00:02:56.540 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:56.540 Message: lib/dmadev: Defining dependency "dmadev"
00:02:56.540 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:56.540 Message: lib/power: Defining dependency "power"
00:02:56.540 Message: lib/reorder: Defining dependency "reorder"
00:02:56.540 Message: lib/security: Defining dependency "security"
00:02:56.540 Has header "linux/userfaultfd.h" : YES
00:02:56.540 Has header "linux/vduse.h" : YES
00:02:56.540 Message: lib/vhost: Defining dependency "vhost"
00:02:56.540 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:56.540 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:56.540 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:56.540 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:56.540 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:56.540 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:56.540 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:56.540 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:56.540 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:56.540 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:56.540 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:56.540 Configuring doxy-api-html.conf using configuration
00:02:56.540 Configuring doxy-api-man.conf using configuration
00:02:56.540 Program mandb found: YES (/usr/bin/mandb)
00:02:56.540 Program sphinx-build found: NO
00:02:56.540 Configuring rte_build_config.h using configuration
00:02:56.540 Message:
00:02:56.540 =================
00:02:56.540 Applications Enabled
00:02:56.540 =================
00:02:56.540
00:02:56.540 apps:
00:02:56.540
00:02:56.540
00:02:56.540 Message:
00:02:56.540 =================
00:02:56.540 Libraries Enabled
00:02:56.540 =================
00:02:56.540
00:02:56.540 libs:
00:02:56.540 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:56.540 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:56.540 cryptodev, dmadev, power, reorder, security, vhost,
00:02:56.540
00:02:56.540 Message:
00:02:56.540 ===============
00:02:56.540 Drivers Enabled
00:02:56.540 ===============
00:02:56.540
00:02:56.540 common:
00:02:56.540
00:02:56.540 bus:
00:02:56.540 pci, vdev,
00:02:56.540 mempool:
00:02:56.540 ring,
00:02:56.540 dma:
00:02:56.540
00:02:56.540 net:
00:02:56.540
00:02:56.540 crypto:
00:02:56.540
00:02:56.540 compress:
00:02:56.540
00:02:56.540 vdpa:
00:02:56.540
00:02:56.540
00:02:56.540 Message:
00:02:56.540 =================
00:02:56.540 Content Skipped
00:02:56.540 =================
00:02:56.540
00:02:56.540 apps:
00:02:56.540 dumpcap: explicitly disabled via build config
00:02:56.540 graph: explicitly disabled via build config
00:02:56.540 pdump: explicitly disabled via build config
00:02:56.540 proc-info: explicitly disabled via build config
00:02:56.540 test-acl: explicitly disabled via build config
00:02:56.540 test-bbdev: explicitly disabled via build config
00:02:56.540 test-cmdline: explicitly disabled via build config
00:02:56.540 test-compress-perf: explicitly disabled via build config
00:02:56.540 test-crypto-perf: explicitly disabled via build config
00:02:56.540 test-dma-perf: explicitly disabled via build config
00:02:56.540 test-eventdev: explicitly disabled via build config
00:02:56.540 test-fib: explicitly disabled via build config
00:02:56.540 test-flow-perf: explicitly disabled via build config
00:02:56.540 test-gpudev: explicitly disabled via build config
00:02:56.540 test-mldev: explicitly disabled via build config
00:02:56.540 test-pipeline: explicitly disabled via build config
00:02:56.540 test-pmd: explicitly disabled via build config
00:02:56.540 test-regex: explicitly disabled via build config
00:02:56.540 test-sad: explicitly disabled via build config
00:02:56.540 test-security-perf: explicitly disabled via build config
00:02:56.540
00:02:56.540 libs:
00:02:56.540 argparse: explicitly disabled via build config
00:02:56.540 metrics: explicitly disabled via build config
00:02:56.540 acl: explicitly disabled via build config
00:02:56.540 bbdev: explicitly disabled via build config
00:02:56.540 bitratestats: explicitly disabled via build config
00:02:56.540 bpf: explicitly disabled via build config
00:02:56.540 cfgfile: explicitly disabled via build config
00:02:56.540 distributor: explicitly disabled via build config
00:02:56.540 efd: explicitly disabled via build config
00:02:56.540 eventdev: explicitly disabled via build config
00:02:56.540 dispatcher: explicitly disabled via build config
00:02:56.540 gpudev: explicitly disabled via build config
00:02:56.540 gro: explicitly disabled via build config
00:02:56.540 gso: explicitly disabled via build config
00:02:56.540 ip_frag: explicitly disabled via build config
00:02:56.540 jobstats: explicitly disabled via build config
00:02:56.540 latencystats: explicitly disabled via build config
00:02:56.540 lpm: explicitly disabled via build config
00:02:56.540 member: explicitly disabled via build config
00:02:56.540 pcapng: explicitly disabled via build config
00:02:56.540 rawdev: explicitly disabled via build config
00:02:56.541 regexdev: explicitly disabled via build config
00:02:56.541 mldev: explicitly disabled via build config
00:02:56.541 rib: explicitly disabled via build config
00:02:56.541 sched: explicitly disabled via build config
00:02:56.541 stack: explicitly disabled via build config
00:02:56.541 ipsec: explicitly disabled via build config
00:02:56.541 pdcp: explicitly disabled via build config
00:02:56.541 fib: explicitly disabled via build config
00:02:56.541 port: explicitly disabled via build config
00:02:56.541 pdump: explicitly disabled via build config
00:02:56.541 table: explicitly disabled via build config
00:02:56.541 pipeline: explicitly disabled via build config
00:02:56.541 graph: explicitly disabled via build config
00:02:56.541 node: explicitly disabled via build config
00:02:56.541
00:02:56.541 drivers:
00:02:56.541 common/cpt: not in enabled drivers build config
00:02:56.541 common/dpaax: not in enabled drivers build config
00:02:56.541 common/iavf: not in enabled drivers build config
00:02:56.541 common/idpf: not in enabled drivers build config
00:02:56.541 common/ionic: not in enabled drivers build config
00:02:56.541 common/mvep: not in enabled drivers build config
00:02:56.541 common/octeontx: not in enabled drivers build config
00:02:56.541 bus/auxiliary: not in enabled drivers build config
00:02:56.541 bus/cdx: not in enabled drivers build config
00:02:56.541 bus/dpaa: not in enabled drivers build config
00:02:56.541 bus/fslmc: not in enabled drivers build config
00:02:56.541 bus/ifpga: not in enabled drivers build config
00:02:56.541 bus/platform: not in enabled drivers build config
00:02:56.541 bus/uacce: not in enabled drivers build config
00:02:56.541 bus/vmbus: not in enabled drivers build config
00:02:56.541 common/cnxk: not in enabled drivers build config
00:02:56.541 common/mlx5: not in enabled drivers build config
00:02:56.541 common/nfp: not in enabled drivers build config
00:02:56.541 common/nitrox: not in enabled drivers build config
00:02:56.541 common/qat: not in enabled drivers build config
00:02:56.541 common/sfc_efx: not in enabled drivers build config
00:02:56.541 mempool/bucket: not in enabled drivers build config
00:02:56.541 mempool/cnxk: not in enabled drivers build config
00:02:56.541 mempool/dpaa: not in enabled drivers build config
00:02:56.541 mempool/dpaa2: not in enabled drivers build config
00:02:56.541 mempool/octeontx: not in enabled drivers build config
00:02:56.541 mempool/stack: not in enabled drivers build config
00:02:56.541 dma/cnxk: not in enabled drivers build config
00:02:56.541 dma/dpaa: not in enabled drivers build config
00:02:56.541 dma/dpaa2: not in enabled drivers build config
00:02:56.541 dma/hisilicon: not in enabled drivers build config
00:02:56.541 dma/idxd: not in enabled drivers build config
00:02:56.541 dma/ioat: not in enabled drivers build config
00:02:56.541 dma/skeleton: not in enabled drivers build config
00:02:56.541 net/af_packet: not in enabled drivers build config
00:02:56.541 net/af_xdp: not in enabled drivers build config
00:02:56.541 net/ark: not in enabled drivers build config
00:02:56.541 net/atlantic: not in enabled drivers build config
00:02:56.541 net/avp: not in enabled drivers build config
00:02:56.541 net/axgbe: not in enabled drivers build config
00:02:56.541 net/bnx2x: not in enabled drivers build config
00:02:56.541 net/bnxt: not in enabled drivers build config
00:02:56.541 net/bonding: not in enabled drivers build config
00:02:56.541 net/cnxk: not in enabled drivers build config
00:02:56.541 net/cpfl: not in enabled drivers
build config 00:02:56.541 net/cxgbe: not in enabled drivers build config 00:02:56.541 net/dpaa: not in enabled drivers build config 00:02:56.541 net/dpaa2: not in enabled drivers build config 00:02:56.541 net/e1000: not in enabled drivers build config 00:02:56.541 net/ena: not in enabled drivers build config 00:02:56.541 net/enetc: not in enabled drivers build config 00:02:56.541 net/enetfec: not in enabled drivers build config 00:02:56.541 net/enic: not in enabled drivers build config 00:02:56.541 net/failsafe: not in enabled drivers build config 00:02:56.541 net/fm10k: not in enabled drivers build config 00:02:56.541 net/gve: not in enabled drivers build config 00:02:56.541 net/hinic: not in enabled drivers build config 00:02:56.541 net/hns3: not in enabled drivers build config 00:02:56.541 net/i40e: not in enabled drivers build config 00:02:56.541 net/iavf: not in enabled drivers build config 00:02:56.541 net/ice: not in enabled drivers build config 00:02:56.541 net/idpf: not in enabled drivers build config 00:02:56.541 net/igc: not in enabled drivers build config 00:02:56.541 net/ionic: not in enabled drivers build config 00:02:56.541 net/ipn3ke: not in enabled drivers build config 00:02:56.541 net/ixgbe: not in enabled drivers build config 00:02:56.541 net/mana: not in enabled drivers build config 00:02:56.541 net/memif: not in enabled drivers build config 00:02:56.541 net/mlx4: not in enabled drivers build config 00:02:56.541 net/mlx5: not in enabled drivers build config 00:02:56.541 net/mvneta: not in enabled drivers build config 00:02:56.541 net/mvpp2: not in enabled drivers build config 00:02:56.541 net/netvsc: not in enabled drivers build config 00:02:56.541 net/nfb: not in enabled drivers build config 00:02:56.541 net/nfp: not in enabled drivers build config 00:02:56.541 net/ngbe: not in enabled drivers build config 00:02:56.541 net/null: not in enabled drivers build config 00:02:56.541 net/octeontx: not in enabled drivers build config 00:02:56.541 net/octeon_ep: not in enabled drivers build config 00:02:56.541 net/pcap: not in enabled drivers build config 00:02:56.541 net/pfe: not in enabled drivers build config 00:02:56.541 net/qede: not in enabled drivers build config 00:02:56.541 net/ring: not in enabled drivers build config 00:02:56.541 net/sfc: not in enabled drivers build config 00:02:56.541 net/softnic: not in enabled drivers build config 00:02:56.541 net/tap: not in enabled drivers build config 00:02:56.541 net/thunderx: not in enabled drivers build config 00:02:56.541 net/txgbe: not in enabled drivers build config 00:02:56.541 net/vdev_netvsc: not in enabled drivers build config 00:02:56.541 net/vhost: not in enabled drivers build config 00:02:56.541 net/virtio: not in enabled drivers build config 00:02:56.541 net/vmxnet3: not in enabled drivers build config 00:02:56.541 raw/*: missing internal dependency, "rawdev" 00:02:56.541 crypto/armv8: not in enabled drivers build config 00:02:56.541 crypto/bcmfs: not in enabled drivers build config 00:02:56.541 crypto/caam_jr: not in enabled drivers build config 00:02:56.541 crypto/ccp: not in enabled drivers build config 00:02:56.541 crypto/cnxk: not in enabled drivers build config 00:02:56.541 crypto/dpaa_sec: not in enabled drivers build config 00:02:56.541 crypto/dpaa2_sec: not in enabled drivers build config 00:02:56.541 crypto/ipsec_mb: not in enabled drivers build config 00:02:56.541 crypto/mlx5: not in enabled drivers build config 00:02:56.541 crypto/mvsam: not in enabled drivers build config 00:02:56.541 crypto/nitrox: 
not in enabled drivers build config 00:02:56.541 crypto/null: not in enabled drivers build config 00:02:56.541 crypto/octeontx: not in enabled drivers build config 00:02:56.541 crypto/openssl: not in enabled drivers build config 00:02:56.541 crypto/scheduler: not in enabled drivers build config 00:02:56.541 crypto/uadk: not in enabled drivers build config 00:02:56.541 crypto/virtio: not in enabled drivers build config 00:02:56.541 compress/isal: not in enabled drivers build config 00:02:56.541 compress/mlx5: not in enabled drivers build config 00:02:56.541 compress/nitrox: not in enabled drivers build config 00:02:56.542 compress/octeontx: not in enabled drivers build config 00:02:56.542 compress/zlib: not in enabled drivers build config 00:02:56.542 regex/*: missing internal dependency, "regexdev" 00:02:56.542 ml/*: missing internal dependency, "mldev" 00:02:56.542 vdpa/ifc: not in enabled drivers build config 00:02:56.542 vdpa/mlx5: not in enabled drivers build config 00:02:56.542 vdpa/nfp: not in enabled drivers build config 00:02:56.542 vdpa/sfc: not in enabled drivers build config 00:02:56.542 event/*: missing internal dependency, "eventdev" 00:02:56.542 baseband/*: missing internal dependency, "bbdev" 00:02:56.542 gpu/*: missing internal dependency, "gpudev" 00:02:56.542 00:02:56.542 00:02:56.800 Build targets in project: 84 00:02:56.800 00:02:56.800 DPDK 24.03.0 00:02:56.800 00:02:56.800 User defined options 00:02:56.800 buildtype : debug 00:02:56.800 default_library : shared 00:02:56.800 libdir : lib 00:02:56.800 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:56.800 b_sanitize : address 00:02:56.800 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:56.800 c_link_args : 00:02:56.800 cpu_instruction_set: native 00:02:56.800 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:56.800 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:56.800 enable_docs : false 00:02:56.800 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:56.800 enable_kmods : false 00:02:56.800 max_lcores : 128 00:02:56.800 tests : false 00:02:56.800 00:02:56.800 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.058 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:57.058 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.317 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.317 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.317 [4/267] Linking static target lib/librte_kvargs.a 00:02:57.317 [5/267] Linking static target lib/librte_log.a 00:02:57.317 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.638 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:57.638 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:57.638 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:57.638 [10/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:57.638 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:57.638 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:57.638 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.638 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.638 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.638 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.638 [17/267] Linking static target lib/librte_telemetry.a 00:02:57.638 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:57.896 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:57.896 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:57.896 [21/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.896 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:58.155 [23/267] Linking target lib/librte_log.so.24.1 00:02:58.155 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.155 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.155 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.155 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:58.155 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.155 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.155 [30/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:58.155 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:58.155 [32/267] Linking target lib/librte_kvargs.so.24.1 00:02:58.412 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.412 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.412 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.412 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:58.412 [37/267] Linking target lib/librte_telemetry.so.24.1 00:02:58.412 [38/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.412 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.412 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:58.412 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:58.670 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:58.670 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:58.670 [44/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:58.670 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:58.670 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:58.671 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:58.671 [48/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:58.671 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:58.929 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:58.929 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:58.929 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:58.929 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:58.929 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:58.929 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:59.187 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.187 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:59.187 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:59.187 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:59.187 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:59.187 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.187 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:59.187 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:59.445 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:59.445 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:59.445 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:59.445 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:59.703 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:59.703 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:59.703 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:59.703 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:59.703 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:59.703 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:59.703 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:59.703 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:59.703 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:59.703 [77/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:59.962 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:59.962 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:59.962 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:59.962 [81/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.962 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:59.962 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.962 [84/267] Linking static target lib/librte_ring.a 00:03:00.220 [85/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:00.220 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.220 [87/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:00.220 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.220 [89/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.220 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.478 [91/267] Linking static target lib/librte_eal.a 00:03:00.478 [92/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.478 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.478 [94/267] Linking static target lib/librte_mempool.a 00:03:00.478 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:00.478 [96/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:00.478 [97/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:00.478 [98/267] Linking static target lib/librte_rcu.a 00:03:00.478 [99/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:00.478 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:00.737 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:00.737 [102/267] Linking static target lib/librte_mbuf.a 00:03:00.737 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:00.737 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:00.737 [105/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:00.737 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.737 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:00.737 [108/267] Linking static target lib/librte_net.a 00:03:00.737 [109/267] Linking static target lib/librte_meter.a 00:03:00.996 [110/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.996 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.996 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.996 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.996 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.253 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.253 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:01.253 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:01.253 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.510 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:01.510 [120/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.510 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:01.767 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:01.767 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.767 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.767 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:01.767 [126/267] Linking static target lib/librte_pci.a 00:03:02.025 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.025 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.025 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.025 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 
00:03:02.025 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:02.025 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:02.025 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:02.025 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:02.025 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:02.025 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:02.025 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:02.025 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:02.025 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:02.025 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:02.282 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:02.282 [142/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.282 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:02.282 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:02.282 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:02.282 [146/267] Linking static target lib/librte_cmdline.a 00:03:02.282 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:02.282 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:02.540 [149/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:02.540 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:02.540 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.798 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:02.798 [153/267] Linking static target lib/librte_timer.a 00:03:02.798 [154/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.798 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.798 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.798 [157/267] Linking static target lib/librte_ethdev.a 00:03:03.055 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:03.055 [159/267] Linking static target lib/librte_compressdev.a 00:03:03.055 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:03.055 [161/267] Linking static target lib/librte_hash.a 00:03:03.055 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:03.055 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:03.055 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:03.055 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:03.055 [166/267] Linking static target lib/librte_dmadev.a 00:03:03.055 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.314 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:03.314 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:03.571 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 
00:03:03.571 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:03.571 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.571 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:03.571 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.571 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.571 [176/267] Linking static target lib/librte_cryptodev.a 00:03:03.571 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.830 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.830 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.830 [180/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.830 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:03.830 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.830 [183/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:04.088 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:04.088 [185/267] Linking static target lib/librte_power.a 00:03:04.088 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:04.088 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:04.346 [188/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:04.346 [189/267] Linking static target lib/librte_reorder.a 00:03:04.346 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:04.346 [191/267] Linking static target lib/librte_security.a 00:03:04.346 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:04.604 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.604 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.862 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.862 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.862 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.862 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.862 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.120 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:05.120 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.120 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.120 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:05.120 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.120 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.378 [206/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.378 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.378 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.378 [209/267] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.635 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.635 [211/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.635 [212/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.635 [213/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.635 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.635 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.635 [216/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.635 [217/267] Linking static target drivers/librte_bus_vdev.a 00:03:05.635 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.635 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.635 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:05.902 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.902 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.902 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.902 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:05.902 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.159 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.416 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:07.348 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.348 [229/267] Linking target lib/librte_eal.so.24.1 00:03:07.605 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:07.605 [231/267] Linking target lib/librte_meter.so.24.1 00:03:07.605 [232/267] Linking target lib/librte_ring.so.24.1 00:03:07.605 [233/267] Linking target lib/librte_pci.so.24.1 00:03:07.605 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:07.605 [235/267] Linking target lib/librte_timer.so.24.1 00:03:07.605 [236/267] Linking target lib/librte_dmadev.so.24.1 00:03:07.605 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:07.605 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:07.606 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:07.606 [240/267] Linking target lib/librte_rcu.so.24.1 00:03:07.606 [241/267] Linking target lib/librte_mempool.so.24.1 00:03:07.606 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:07.606 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:07.864 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:07.864 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.864 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.864 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.864 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:07.864 [249/267] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.864 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:07.864 [251/267] Linking target lib/librte_net.so.24.1 00:03:07.864 [252/267] Linking target lib/librte_compressdev.so.24.1 00:03:07.864 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:08.121 [254/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.121 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:08.121 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:08.121 [257/267] Linking target lib/librte_hash.so.24.1 00:03:08.121 [258/267] Linking target lib/librte_security.so.24.1 00:03:08.121 [259/267] Linking target lib/librte_cmdline.so.24.1 00:03:08.121 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:08.121 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:08.121 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:08.385 [263/267] Linking target lib/librte_power.so.24.1 00:03:09.316 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:09.576 [265/267] Linking static target lib/librte_vhost.a 00:03:10.948 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.948 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:10.948 INFO: autodetecting backend as ninja 00:03:10.948 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:23.154 CC lib/ut_mock/mock.o 00:03:23.154 CC lib/log/log_deprecated.o 00:03:23.154 CC lib/log/log.o 00:03:23.155 CC lib/log/log_flags.o 00:03:23.155 CC lib/ut/ut.o 00:03:23.415 LIB libspdk_ut_mock.a 00:03:23.415 LIB libspdk_log.a 00:03:23.415 LIB libspdk_ut.a 00:03:23.415 SO libspdk_ut_mock.so.6.0 00:03:23.415 SO libspdk_ut.so.2.0 00:03:23.415 SO libspdk_log.so.7.1 00:03:23.415 SYMLINK libspdk_ut_mock.so 00:03:23.415 SYMLINK libspdk_ut.so 00:03:23.415 SYMLINK libspdk_log.so 00:03:23.674 CC lib/util/base64.o 00:03:23.674 CC lib/util/bit_array.o 00:03:23.674 CC lib/util/crc16.o 00:03:23.674 CC lib/util/crc32.o 00:03:23.674 CXX lib/trace_parser/trace.o 00:03:23.674 CC lib/util/crc32c.o 00:03:23.674 CC lib/util/cpuset.o 00:03:23.674 CC lib/dma/dma.o 00:03:23.674 CC lib/ioat/ioat.o 00:03:23.674 CC lib/vfio_user/host/vfio_user_pci.o 00:03:23.674 CC lib/util/crc32_ieee.o 00:03:23.674 CC lib/util/crc64.o 00:03:23.674 CC lib/util/dif.o 00:03:23.674 CC lib/util/fd.o 00:03:23.674 CC lib/util/fd_group.o 00:03:23.935 LIB libspdk_dma.a 00:03:23.935 SO libspdk_dma.so.5.0 00:03:23.935 CC lib/vfio_user/host/vfio_user.o 00:03:23.935 CC lib/util/file.o 00:03:23.935 CC lib/util/hexlify.o 00:03:23.935 CC lib/util/iov.o 00:03:23.935 SYMLINK libspdk_dma.so 00:03:23.935 CC lib/util/math.o 00:03:23.935 LIB libspdk_ioat.a 00:03:23.935 SO libspdk_ioat.so.7.0 00:03:23.935 CC lib/util/net.o 00:03:23.935 SYMLINK libspdk_ioat.so 00:03:23.935 CC lib/util/pipe.o 00:03:23.935 CC lib/util/strerror_tls.o 00:03:23.935 CC lib/util/string.o 00:03:23.935 CC lib/util/uuid.o 00:03:23.935 LIB libspdk_vfio_user.a 00:03:23.935 CC lib/util/xor.o 00:03:23.935 CC lib/util/zipf.o 00:03:23.935 SO libspdk_vfio_user.so.5.0 00:03:23.935 CC lib/util/md5.o 00:03:24.196 SYMLINK libspdk_vfio_user.so 00:03:24.457 LIB libspdk_util.a 00:03:24.457 SO libspdk_util.so.10.1 00:03:24.457 LIB libspdk_trace_parser.a 
00:03:24.457 SO libspdk_trace_parser.so.6.0 00:03:24.457 SYMLINK libspdk_util.so 00:03:24.719 SYMLINK libspdk_trace_parser.so 00:03:24.719 CC lib/json/json_parse.o 00:03:24.719 CC lib/json/json_util.o 00:03:24.719 CC lib/json/json_write.o 00:03:24.719 CC lib/rdma_utils/rdma_utils.o 00:03:24.719 CC lib/idxd/idxd.o 00:03:24.719 CC lib/idxd/idxd_user.o 00:03:24.719 CC lib/idxd/idxd_kernel.o 00:03:24.719 CC lib/vmd/vmd.o 00:03:24.719 CC lib/conf/conf.o 00:03:24.719 CC lib/env_dpdk/env.o 00:03:24.719 CC lib/vmd/led.o 00:03:24.981 CC lib/env_dpdk/memory.o 00:03:24.981 LIB libspdk_conf.a 00:03:24.981 CC lib/env_dpdk/pci.o 00:03:24.981 SO libspdk_conf.so.6.0 00:03:24.981 LIB libspdk_rdma_utils.a 00:03:24.981 CC lib/env_dpdk/init.o 00:03:24.981 SO libspdk_rdma_utils.so.1.0 00:03:24.981 LIB libspdk_json.a 00:03:24.981 SYMLINK libspdk_conf.so 00:03:24.981 CC lib/env_dpdk/threads.o 00:03:24.981 CC lib/env_dpdk/pci_ioat.o 00:03:24.981 SO libspdk_json.so.6.0 00:03:24.981 SYMLINK libspdk_rdma_utils.so 00:03:24.981 CC lib/env_dpdk/pci_virtio.o 00:03:24.981 SYMLINK libspdk_json.so 00:03:24.981 CC lib/env_dpdk/pci_vmd.o 00:03:25.242 CC lib/env_dpdk/pci_idxd.o 00:03:25.242 CC lib/env_dpdk/pci_event.o 00:03:25.242 CC lib/rdma_provider/common.o 00:03:25.242 LIB libspdk_idxd.a 00:03:25.242 SO libspdk_idxd.so.12.1 00:03:25.242 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.242 CC lib/env_dpdk/sigbus_handler.o 00:03:25.242 CC lib/env_dpdk/pci_dpdk.o 00:03:25.242 SYMLINK libspdk_idxd.so 00:03:25.242 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.242 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.242 LIB libspdk_vmd.a 00:03:25.242 LIB libspdk_rdma_provider.a 00:03:25.242 SO libspdk_vmd.so.6.0 00:03:25.501 SO libspdk_rdma_provider.so.7.0 00:03:25.501 SYMLINK libspdk_vmd.so 00:03:25.501 SYMLINK libspdk_rdma_provider.so 00:03:25.501 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.501 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.501 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.501 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.763 LIB libspdk_jsonrpc.a 00:03:25.763 SO libspdk_jsonrpc.so.6.0 00:03:25.763 SYMLINK libspdk_jsonrpc.so 00:03:26.025 CC lib/rpc/rpc.o 00:03:26.025 LIB libspdk_env_dpdk.a 00:03:26.287 SO libspdk_env_dpdk.so.15.1 00:03:26.287 LIB libspdk_rpc.a 00:03:26.287 SO libspdk_rpc.so.6.0 00:03:26.287 SYMLINK libspdk_env_dpdk.so 00:03:26.287 SYMLINK libspdk_rpc.so 00:03:26.547 CC lib/trace/trace_flags.o 00:03:26.547 CC lib/trace/trace.o 00:03:26.547 CC lib/keyring/keyring_rpc.o 00:03:26.547 CC lib/trace/trace_rpc.o 00:03:26.547 CC lib/keyring/keyring.o 00:03:26.547 CC lib/notify/notify.o 00:03:26.547 CC lib/notify/notify_rpc.o 00:03:26.808 LIB libspdk_notify.a 00:03:26.808 SO libspdk_notify.so.6.0 00:03:26.808 SYMLINK libspdk_notify.so 00:03:26.808 LIB libspdk_keyring.a 00:03:26.808 LIB libspdk_trace.a 00:03:26.808 SO libspdk_keyring.so.2.0 00:03:26.808 SO libspdk_trace.so.11.0 00:03:26.808 SYMLINK libspdk_keyring.so 00:03:26.808 SYMLINK libspdk_trace.so 00:03:27.071 CC lib/sock/sock.o 00:03:27.071 CC lib/sock/sock_rpc.o 00:03:27.071 CC lib/thread/thread.o 00:03:27.071 CC lib/thread/iobuf.o 00:03:27.664 LIB libspdk_sock.a 00:03:27.664 SO libspdk_sock.so.10.0 00:03:27.664 SYMLINK libspdk_sock.so 00:03:27.925 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.925 CC lib/nvme/nvme_ctrlr.o 00:03:27.925 CC lib/nvme/nvme_fabric.o 00:03:27.925 CC lib/nvme/nvme_ns.o 00:03:27.925 CC lib/nvme/nvme_ns_cmd.o 00:03:27.925 CC lib/nvme/nvme_pcie_common.o 00:03:27.925 CC lib/nvme/nvme_pcie.o 00:03:27.925 CC lib/nvme/nvme.o 00:03:27.925 CC 
lib/nvme/nvme_qpair.o 00:03:28.497 CC lib/nvme/nvme_quirks.o 00:03:28.497 CC lib/nvme/nvme_transport.o 00:03:28.497 CC lib/nvme/nvme_discovery.o 00:03:28.758 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.758 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.758 CC lib/nvme/nvme_tcp.o 00:03:28.758 LIB libspdk_thread.a 00:03:28.758 CC lib/nvme/nvme_opal.o 00:03:28.758 SO libspdk_thread.so.11.0 00:03:28.758 SYMLINK libspdk_thread.so 00:03:28.758 CC lib/nvme/nvme_io_msg.o 00:03:29.018 CC lib/nvme/nvme_poll_group.o 00:03:29.018 CC lib/nvme/nvme_zns.o 00:03:29.018 CC lib/nvme/nvme_stubs.o 00:03:29.018 CC lib/nvme/nvme_auth.o 00:03:29.278 CC lib/nvme/nvme_cuse.o 00:03:29.278 CC lib/nvme/nvme_rdma.o 00:03:29.537 CC lib/accel/accel.o 00:03:29.537 CC lib/accel/accel_rpc.o 00:03:29.537 CC lib/blob/blobstore.o 00:03:29.537 CC lib/init/json_config.o 00:03:29.798 CC lib/virtio/virtio.o 00:03:29.798 CC lib/virtio/virtio_vhost_user.o 00:03:29.798 CC lib/init/subsystem.o 00:03:30.057 CC lib/virtio/virtio_vfio_user.o 00:03:30.057 CC lib/virtio/virtio_pci.o 00:03:30.057 CC lib/blob/request.o 00:03:30.057 CC lib/init/subsystem_rpc.o 00:03:30.057 CC lib/blob/zeroes.o 00:03:30.057 CC lib/blob/blob_bs_dev.o 00:03:30.057 CC lib/init/rpc.o 00:03:30.316 LIB libspdk_virtio.a 00:03:30.316 CC lib/accel/accel_sw.o 00:03:30.316 CC lib/fsdev/fsdev.o 00:03:30.316 CC lib/fsdev/fsdev_io.o 00:03:30.316 SO libspdk_virtio.so.7.0 00:03:30.316 LIB libspdk_init.a 00:03:30.316 SYMLINK libspdk_virtio.so 00:03:30.316 CC lib/fsdev/fsdev_rpc.o 00:03:30.316 SO libspdk_init.so.6.0 00:03:30.316 SYMLINK libspdk_init.so 00:03:30.576 CC lib/event/app.o 00:03:30.576 CC lib/event/scheduler_static.o 00:03:30.576 CC lib/event/reactor.o 00:03:30.576 CC lib/event/app_rpc.o 00:03:30.576 CC lib/event/log_rpc.o 00:03:30.576 LIB libspdk_accel.a 00:03:30.576 LIB libspdk_nvme.a 00:03:30.576 SO libspdk_accel.so.16.0 00:03:30.576 SYMLINK libspdk_accel.so 00:03:30.576 SO libspdk_nvme.so.15.0 00:03:30.836 CC lib/bdev/bdev_zone.o 00:03:30.836 CC lib/bdev/scsi_nvme.o 00:03:30.836 CC lib/bdev/bdev.o 00:03:30.836 CC lib/bdev/bdev_rpc.o 00:03:30.836 CC lib/bdev/part.o 00:03:30.836 LIB libspdk_fsdev.a 00:03:30.836 SO libspdk_fsdev.so.2.0 00:03:30.836 SYMLINK libspdk_nvme.so 00:03:30.836 SYMLINK libspdk_fsdev.so 00:03:31.097 LIB libspdk_event.a 00:03:31.097 SO libspdk_event.so.14.0 00:03:31.097 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:31.097 SYMLINK libspdk_event.so 00:03:31.668 LIB libspdk_fuse_dispatcher.a 00:03:31.668 SO libspdk_fuse_dispatcher.so.1.0 00:03:31.668 SYMLINK libspdk_fuse_dispatcher.so 00:03:32.609 LIB libspdk_blob.a 00:03:32.871 SO libspdk_blob.so.12.0 00:03:32.871 SYMLINK libspdk_blob.so 00:03:33.132 CC lib/lvol/lvol.o 00:03:33.132 CC lib/blobfs/blobfs.o 00:03:33.132 CC lib/blobfs/tree.o 00:03:33.393 LIB libspdk_bdev.a 00:03:33.393 SO libspdk_bdev.so.17.0 00:03:33.393 SYMLINK libspdk_bdev.so 00:03:33.693 CC lib/ublk/ublk.o 00:03:33.693 CC lib/ublk/ublk_rpc.o 00:03:33.693 CC lib/nvmf/ctrlr.o 00:03:33.693 CC lib/nvmf/ctrlr_discovery.o 00:03:33.693 CC lib/nvmf/ctrlr_bdev.o 00:03:33.693 CC lib/ftl/ftl_core.o 00:03:33.693 CC lib/scsi/dev.o 00:03:33.693 CC lib/nbd/nbd.o 00:03:33.693 LIB libspdk_blobfs.a 00:03:33.693 SO libspdk_blobfs.so.11.0 00:03:33.693 CC lib/nbd/nbd_rpc.o 00:03:33.693 LIB libspdk_lvol.a 00:03:33.693 SO libspdk_lvol.so.11.0 00:03:33.957 CC lib/scsi/lun.o 00:03:33.957 SYMLINK libspdk_blobfs.so 00:03:33.957 CC lib/ftl/ftl_init.o 00:03:33.957 SYMLINK libspdk_lvol.so 00:03:33.957 CC lib/ftl/ftl_layout.o 00:03:33.957 CC lib/ftl/ftl_debug.o 
00:03:33.957 CC lib/ftl/ftl_io.o 00:03:33.957 CC lib/ftl/ftl_sb.o 00:03:33.957 CC lib/scsi/port.o 00:03:33.957 LIB libspdk_nbd.a 00:03:33.957 CC lib/ftl/ftl_l2p.o 00:03:33.957 SO libspdk_nbd.so.7.0 00:03:34.229 CC lib/nvmf/subsystem.o 00:03:34.229 LIB libspdk_ublk.a 00:03:34.229 SYMLINK libspdk_nbd.so 00:03:34.229 CC lib/ftl/ftl_l2p_flat.o 00:03:34.229 CC lib/ftl/ftl_nv_cache.o 00:03:34.229 SO libspdk_ublk.so.3.0 00:03:34.229 CC lib/scsi/scsi.o 00:03:34.229 CC lib/scsi/scsi_bdev.o 00:03:34.229 SYMLINK libspdk_ublk.so 00:03:34.229 CC lib/scsi/scsi_pr.o 00:03:34.229 CC lib/ftl/ftl_band.o 00:03:34.229 CC lib/ftl/ftl_band_ops.o 00:03:34.229 CC lib/ftl/ftl_writer.o 00:03:34.229 CC lib/ftl/ftl_rq.o 00:03:34.229 CC lib/nvmf/nvmf.o 00:03:34.489 CC lib/ftl/ftl_reloc.o 00:03:34.489 CC lib/ftl/ftl_l2p_cache.o 00:03:34.489 CC lib/ftl/ftl_p2l.o 00:03:34.489 CC lib/nvmf/nvmf_rpc.o 00:03:34.489 CC lib/scsi/scsi_rpc.o 00:03:34.489 CC lib/ftl/ftl_p2l_log.o 00:03:34.489 CC lib/scsi/task.o 00:03:34.489 CC lib/nvmf/transport.o 00:03:34.749 CC lib/ftl/mngt/ftl_mngt.o 00:03:34.750 LIB libspdk_scsi.a 00:03:34.750 SO libspdk_scsi.so.9.0 00:03:34.750 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:34.750 CC lib/nvmf/tcp.o 00:03:34.750 SYMLINK libspdk_scsi.so 00:03:35.010 CC lib/nvmf/stubs.o 00:03:35.010 CC lib/nvmf/mdns_server.o 00:03:35.010 CC lib/nvmf/rdma.o 00:03:35.010 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.010 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.270 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.270 CC lib/nvmf/auth.o 00:03:35.270 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:35.270 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:35.270 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:35.270 CC lib/iscsi/conn.o 00:03:35.270 CC lib/vhost/vhost.o 00:03:35.530 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:35.530 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:35.530 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:35.530 CC lib/vhost/vhost_rpc.o 00:03:35.530 CC lib/vhost/vhost_scsi.o 00:03:35.530 CC lib/vhost/vhost_blk.o 00:03:35.530 CC lib/vhost/rte_vhost_user.o 00:03:35.530 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:35.790 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:35.790 CC lib/ftl/utils/ftl_conf.o 00:03:36.049 CC lib/iscsi/init_grp.o 00:03:36.049 CC lib/iscsi/iscsi.o 00:03:36.049 CC lib/ftl/utils/ftl_md.o 00:03:36.049 CC lib/iscsi/param.o 00:03:36.049 CC lib/ftl/utils/ftl_mempool.o 00:03:36.309 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.309 CC lib/iscsi/portal_grp.o 00:03:36.309 CC lib/iscsi/tgt_node.o 00:03:36.309 CC lib/iscsi/iscsi_subsystem.o 00:03:36.309 CC lib/ftl/utils/ftl_property.o 00:03:36.309 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.570 CC lib/iscsi/iscsi_rpc.o 00:03:36.570 CC lib/iscsi/task.o 00:03:36.570 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.570 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.570 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.570 LIB libspdk_vhost.a 00:03:36.570 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.570 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.570 SO libspdk_vhost.so.8.0 00:03:36.570 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:36.831 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:36.832 LIB libspdk_nvmf.a 00:03:36.832 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.832 SYMLINK libspdk_vhost.so 00:03:36.832 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.832 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.832 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:36.832 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:36.832 CC lib/ftl/base/ftl_base_dev.o 00:03:36.832 SO libspdk_nvmf.so.20.0 00:03:36.832 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.832 CC 
lib/ftl/ftl_trace.o 00:03:37.091 SYMLINK libspdk_nvmf.so 00:03:37.091 LIB libspdk_ftl.a 00:03:37.091 LIB libspdk_iscsi.a 00:03:37.091 SO libspdk_iscsi.so.8.0 00:03:37.350 SO libspdk_ftl.so.9.0 00:03:37.350 SYMLINK libspdk_iscsi.so 00:03:37.350 SYMLINK libspdk_ftl.so 00:03:37.608 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.608 CC module/accel/ioat/accel_ioat.o 00:03:37.608 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:37.608 CC module/sock/posix/posix.o 00:03:37.608 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.608 CC module/scheduler/gscheduler/gscheduler.o 00:03:37.608 CC module/blob/bdev/blob_bdev.o 00:03:37.608 CC module/accel/error/accel_error.o 00:03:37.608 CC module/fsdev/aio/fsdev_aio.o 00:03:37.608 CC module/keyring/file/keyring.o 00:03:37.869 LIB libspdk_env_dpdk_rpc.a 00:03:37.869 SO libspdk_env_dpdk_rpc.so.6.0 00:03:37.869 SYMLINK libspdk_env_dpdk_rpc.so 00:03:37.869 CC module/accel/error/accel_error_rpc.o 00:03:37.869 CC module/keyring/file/keyring_rpc.o 00:03:37.869 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.869 LIB libspdk_scheduler_dpdk_governor.a 00:03:37.869 LIB libspdk_scheduler_dynamic.a 00:03:37.869 SO libspdk_scheduler_dynamic.so.4.0 00:03:37.869 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:37.869 LIB libspdk_scheduler_gscheduler.a 00:03:37.869 SO libspdk_scheduler_gscheduler.so.4.0 00:03:37.869 LIB libspdk_accel_error.a 00:03:37.869 SYMLINK libspdk_scheduler_dynamic.so 00:03:37.869 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:37.869 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:37.869 CC module/fsdev/aio/linux_aio_mgr.o 00:03:37.869 LIB libspdk_blob_bdev.a 00:03:37.869 SO libspdk_accel_error.so.2.0 00:03:37.869 LIB libspdk_keyring_file.a 00:03:37.869 LIB libspdk_accel_ioat.a 00:03:37.869 SO libspdk_blob_bdev.so.12.0 00:03:37.869 SYMLINK libspdk_scheduler_gscheduler.so 00:03:37.869 SO libspdk_accel_ioat.so.6.0 00:03:37.869 SO libspdk_keyring_file.so.2.0 00:03:37.869 SYMLINK libspdk_blob_bdev.so 00:03:37.869 SYMLINK libspdk_accel_error.so 00:03:38.130 SYMLINK libspdk_keyring_file.so 00:03:38.130 SYMLINK libspdk_accel_ioat.so 00:03:38.130 CC module/accel/dsa/accel_dsa.o 00:03:38.130 CC module/keyring/linux/keyring.o 00:03:38.130 CC module/accel/iaa/accel_iaa.o 00:03:38.130 CC module/bdev/gpt/gpt.o 00:03:38.130 CC module/bdev/lvol/vbdev_lvol.o 00:03:38.130 CC module/bdev/error/vbdev_error.o 00:03:38.130 CC module/blobfs/bdev/blobfs_bdev.o 00:03:38.130 CC module/bdev/delay/vbdev_delay.o 00:03:38.131 CC module/keyring/linux/keyring_rpc.o 00:03:38.391 LIB libspdk_fsdev_aio.a 00:03:38.391 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.391 LIB libspdk_sock_posix.a 00:03:38.391 LIB libspdk_keyring_linux.a 00:03:38.391 SO libspdk_fsdev_aio.so.1.0 00:03:38.391 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.391 SO libspdk_sock_posix.so.6.0 00:03:38.391 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.391 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.391 SO libspdk_keyring_linux.so.1.0 00:03:38.391 SYMLINK libspdk_fsdev_aio.so 00:03:38.391 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.391 SYMLINK libspdk_keyring_linux.so 00:03:38.391 SYMLINK libspdk_sock_posix.so 00:03:38.391 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.391 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.391 LIB libspdk_accel_dsa.a 00:03:38.391 SO libspdk_accel_dsa.so.5.0 00:03:38.391 LIB libspdk_accel_iaa.a 00:03:38.391 SO libspdk_accel_iaa.so.3.0 00:03:38.391 LIB libspdk_blobfs_bdev.a 00:03:38.391 SYMLINK libspdk_accel_dsa.so 00:03:38.391 SYMLINK libspdk_accel_iaa.so 00:03:38.391 SO 
libspdk_blobfs_bdev.so.6.0 00:03:38.391 CC module/bdev/malloc/bdev_malloc.o 00:03:38.392 LIB libspdk_bdev_error.a 00:03:38.654 LIB libspdk_bdev_gpt.a 00:03:38.654 SYMLINK libspdk_blobfs_bdev.so 00:03:38.654 SO libspdk_bdev_error.so.6.0 00:03:38.654 LIB libspdk_bdev_delay.a 00:03:38.654 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.654 SO libspdk_bdev_gpt.so.6.0 00:03:38.654 SO libspdk_bdev_delay.so.6.0 00:03:38.654 SYMLINK libspdk_bdev_error.so 00:03:38.654 CC module/bdev/nvme/bdev_nvme.o 00:03:38.654 SYMLINK libspdk_bdev_gpt.so 00:03:38.654 CC module/bdev/null/bdev_null.o 00:03:38.654 SYMLINK libspdk_bdev_delay.so 00:03:38.654 CC module/bdev/passthru/vbdev_passthru.o 00:03:38.654 LIB libspdk_bdev_lvol.a 00:03:38.654 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.654 SO libspdk_bdev_lvol.so.6.0 00:03:38.654 CC module/bdev/raid/bdev_raid.o 00:03:38.654 CC module/bdev/split/vbdev_split.o 00:03:38.654 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:38.654 SYMLINK libspdk_bdev_lvol.so 00:03:38.654 CC module/bdev/xnvme/bdev_xnvme.o 00:03:38.916 LIB libspdk_bdev_malloc.a 00:03:38.916 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:38.916 CC module/bdev/null/bdev_null_rpc.o 00:03:38.916 SO libspdk_bdev_malloc.so.6.0 00:03:38.916 SYMLINK libspdk_bdev_malloc.so 00:03:38.916 CC module/bdev/aio/bdev_aio.o 00:03:38.916 CC module/bdev/split/vbdev_split_rpc.o 00:03:38.916 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.916 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.916 LIB libspdk_bdev_passthru.a 00:03:38.916 LIB libspdk_bdev_null.a 00:03:38.916 SO libspdk_bdev_passthru.so.6.0 00:03:38.916 SO libspdk_bdev_null.so.6.0 00:03:38.916 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:38.916 LIB libspdk_bdev_zone_block.a 00:03:38.916 SYMLINK libspdk_bdev_passthru.so 00:03:38.916 SYMLINK libspdk_bdev_null.so 00:03:38.916 CC module/bdev/raid/bdev_raid_rpc.o 00:03:38.916 LIB libspdk_bdev_split.a 00:03:38.916 SO libspdk_bdev_zone_block.so.6.0 00:03:38.916 SO libspdk_bdev_split.so.6.0 00:03:39.177 SYMLINK libspdk_bdev_zone_block.so 00:03:39.177 SYMLINK libspdk_bdev_split.so 00:03:39.177 LIB libspdk_bdev_xnvme.a 00:03:39.177 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.177 SO libspdk_bdev_xnvme.so.3.0 00:03:39.177 CC module/bdev/ftl/bdev_ftl.o 00:03:39.177 SYMLINK libspdk_bdev_xnvme.so 00:03:39.177 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.177 CC module/bdev/raid/raid0.o 00:03:39.177 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.177 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.177 LIB libspdk_bdev_aio.a 00:03:39.177 SO libspdk_bdev_aio.so.6.0 00:03:39.177 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.439 SYMLINK libspdk_bdev_aio.so 00:03:39.439 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.439 CC module/bdev/nvme/nvme_rpc.o 00:03:39.439 LIB libspdk_bdev_ftl.a 00:03:39.439 SO libspdk_bdev_ftl.so.6.0 00:03:39.439 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.439 SYMLINK libspdk_bdev_ftl.so 00:03:39.439 CC module/bdev/nvme/vbdev_opal.o 00:03:39.439 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.439 LIB libspdk_bdev_iscsi.a 00:03:39.439 SO libspdk_bdev_iscsi.so.6.0 00:03:39.439 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.439 CC module/bdev/raid/raid1.o 00:03:39.439 SYMLINK libspdk_bdev_iscsi.so 00:03:39.439 CC module/bdev/raid/concat.o 00:03:39.439 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.700 LIB libspdk_bdev_raid.a 00:03:39.700 LIB libspdk_bdev_virtio.a 00:03:39.700 SO libspdk_bdev_virtio.so.6.0 00:03:39.700 SO libspdk_bdev_raid.so.6.0 00:03:39.961 SYMLINK libspdk_bdev_virtio.so 
00:03:39.961 SYMLINK libspdk_bdev_raid.so 00:03:41.356 LIB libspdk_bdev_nvme.a 00:03:41.356 SO libspdk_bdev_nvme.so.7.1 00:03:41.356 SYMLINK libspdk_bdev_nvme.so 00:03:41.617 CC module/event/subsystems/sock/sock.o 00:03:41.617 CC module/event/subsystems/keyring/keyring.o 00:03:41.617 CC module/event/subsystems/vmd/vmd.o 00:03:41.617 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:41.617 CC module/event/subsystems/fsdev/fsdev.o 00:03:41.617 CC module/event/subsystems/scheduler/scheduler.o 00:03:41.617 CC module/event/subsystems/iobuf/iobuf.o 00:03:41.617 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:41.617 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:41.878 LIB libspdk_event_scheduler.a 00:03:41.878 LIB libspdk_event_keyring.a 00:03:41.878 LIB libspdk_event_vmd.a 00:03:41.878 SO libspdk_event_scheduler.so.4.0 00:03:41.878 SO libspdk_event_keyring.so.1.0 00:03:41.878 LIB libspdk_event_sock.a 00:03:41.878 LIB libspdk_event_vhost_blk.a 00:03:41.878 LIB libspdk_event_fsdev.a 00:03:41.878 LIB libspdk_event_iobuf.a 00:03:41.878 SO libspdk_event_sock.so.5.0 00:03:41.878 SO libspdk_event_fsdev.so.1.0 00:03:41.878 SO libspdk_event_vmd.so.6.0 00:03:41.878 SO libspdk_event_vhost_blk.so.3.0 00:03:41.878 SO libspdk_event_iobuf.so.3.0 00:03:41.878 SYMLINK libspdk_event_scheduler.so 00:03:41.878 SYMLINK libspdk_event_keyring.so 00:03:41.878 SYMLINK libspdk_event_fsdev.so 00:03:41.878 SYMLINK libspdk_event_vhost_blk.so 00:03:41.878 SYMLINK libspdk_event_sock.so 00:03:41.878 SYMLINK libspdk_event_vmd.so 00:03:41.878 SYMLINK libspdk_event_iobuf.so 00:03:42.141 CC module/event/subsystems/accel/accel.o 00:03:42.141 LIB libspdk_event_accel.a 00:03:42.141 SO libspdk_event_accel.so.6.0 00:03:42.402 SYMLINK libspdk_event_accel.so 00:03:42.664 CC module/event/subsystems/bdev/bdev.o 00:03:42.664 LIB libspdk_event_bdev.a 00:03:42.664 SO libspdk_event_bdev.so.6.0 00:03:42.664 SYMLINK libspdk_event_bdev.so 00:03:42.925 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:42.925 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:42.925 CC module/event/subsystems/nbd/nbd.o 00:03:42.925 CC module/event/subsystems/scsi/scsi.o 00:03:42.925 CC module/event/subsystems/ublk/ublk.o 00:03:42.925 LIB libspdk_event_nbd.a 00:03:42.925 LIB libspdk_event_ublk.a 00:03:42.925 SO libspdk_event_nbd.so.6.0 00:03:42.925 LIB libspdk_event_scsi.a 00:03:42.925 SO libspdk_event_ublk.so.3.0 00:03:42.925 SO libspdk_event_scsi.so.6.0 00:03:43.187 SYMLINK libspdk_event_nbd.so 00:03:43.187 SYMLINK libspdk_event_scsi.so 00:03:43.187 SYMLINK libspdk_event_ublk.so 00:03:43.187 LIB libspdk_event_nvmf.a 00:03:43.187 SO libspdk_event_nvmf.so.6.0 00:03:43.187 SYMLINK libspdk_event_nvmf.so 00:03:43.187 CC module/event/subsystems/iscsi/iscsi.o 00:03:43.187 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:43.447 LIB libspdk_event_iscsi.a 00:03:43.447 SO libspdk_event_iscsi.so.6.0 00:03:43.447 LIB libspdk_event_vhost_scsi.a 00:03:43.447 SO libspdk_event_vhost_scsi.so.3.0 00:03:43.447 SYMLINK libspdk_event_iscsi.so 00:03:43.447 SYMLINK libspdk_event_vhost_scsi.so 00:03:43.708 SO libspdk.so.6.0 00:03:43.708 SYMLINK libspdk.so 00:03:43.708 CXX app/trace/trace.o 00:03:43.708 CC app/trace_record/trace_record.o 00:03:43.708 CC test/rpc_client/rpc_client_test.o 00:03:43.708 TEST_HEADER include/spdk/accel.h 00:03:43.708 TEST_HEADER include/spdk/accel_module.h 00:03:43.708 TEST_HEADER include/spdk/assert.h 00:03:43.708 TEST_HEADER include/spdk/barrier.h 00:03:43.708 TEST_HEADER include/spdk/base64.h 00:03:43.708 TEST_HEADER include/spdk/bdev.h 
00:03:43.708 TEST_HEADER include/spdk/bdev_module.h 00:03:43.708 TEST_HEADER include/spdk/bdev_zone.h 00:03:43.708 TEST_HEADER include/spdk/bit_array.h 00:03:43.708 TEST_HEADER include/spdk/bit_pool.h 00:03:43.708 TEST_HEADER include/spdk/blob_bdev.h 00:03:43.708 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:43.708 TEST_HEADER include/spdk/blobfs.h 00:03:43.708 TEST_HEADER include/spdk/blob.h 00:03:43.708 TEST_HEADER include/spdk/conf.h 00:03:43.708 TEST_HEADER include/spdk/config.h 00:03:43.708 TEST_HEADER include/spdk/cpuset.h 00:03:43.708 TEST_HEADER include/spdk/crc16.h 00:03:43.708 TEST_HEADER include/spdk/crc32.h 00:03:43.708 TEST_HEADER include/spdk/crc64.h 00:03:43.708 TEST_HEADER include/spdk/dif.h 00:03:43.708 TEST_HEADER include/spdk/dma.h 00:03:43.708 TEST_HEADER include/spdk/endian.h 00:03:43.708 TEST_HEADER include/spdk/env_dpdk.h 00:03:43.708 TEST_HEADER include/spdk/env.h 00:03:43.708 TEST_HEADER include/spdk/event.h 00:03:43.708 TEST_HEADER include/spdk/fd_group.h 00:03:43.708 TEST_HEADER include/spdk/fd.h 00:03:43.708 TEST_HEADER include/spdk/file.h 00:03:43.708 TEST_HEADER include/spdk/fsdev.h 00:03:43.708 TEST_HEADER include/spdk/fsdev_module.h 00:03:43.708 TEST_HEADER include/spdk/ftl.h 00:03:43.708 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:43.708 TEST_HEADER include/spdk/gpt_spec.h 00:03:43.708 TEST_HEADER include/spdk/hexlify.h 00:03:43.708 TEST_HEADER include/spdk/histogram_data.h 00:03:43.708 CC test/thread/poller_perf/poller_perf.o 00:03:43.708 TEST_HEADER include/spdk/idxd.h 00:03:43.708 TEST_HEADER include/spdk/idxd_spec.h 00:03:43.708 TEST_HEADER include/spdk/init.h 00:03:43.708 CC app/nvmf_tgt/nvmf_main.o 00:03:43.708 TEST_HEADER include/spdk/ioat.h 00:03:43.708 CC examples/util/zipf/zipf.o 00:03:43.708 TEST_HEADER include/spdk/ioat_spec.h 00:03:43.968 TEST_HEADER include/spdk/iscsi_spec.h 00:03:43.968 TEST_HEADER include/spdk/json.h 00:03:43.968 TEST_HEADER include/spdk/jsonrpc.h 00:03:43.968 TEST_HEADER include/spdk/keyring.h 00:03:43.968 TEST_HEADER include/spdk/keyring_module.h 00:03:43.968 TEST_HEADER include/spdk/likely.h 00:03:43.968 TEST_HEADER include/spdk/log.h 00:03:43.968 TEST_HEADER include/spdk/lvol.h 00:03:43.968 TEST_HEADER include/spdk/md5.h 00:03:43.968 TEST_HEADER include/spdk/memory.h 00:03:43.968 TEST_HEADER include/spdk/mmio.h 00:03:43.968 TEST_HEADER include/spdk/nbd.h 00:03:43.968 CC test/dma/test_dma/test_dma.o 00:03:43.968 TEST_HEADER include/spdk/net.h 00:03:43.968 TEST_HEADER include/spdk/notify.h 00:03:43.968 CC test/app/bdev_svc/bdev_svc.o 00:03:43.968 TEST_HEADER include/spdk/nvme.h 00:03:43.968 TEST_HEADER include/spdk/nvme_intel.h 00:03:43.968 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:43.968 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:43.968 TEST_HEADER include/spdk/nvme_spec.h 00:03:43.968 TEST_HEADER include/spdk/nvme_zns.h 00:03:43.969 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:43.969 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:43.969 TEST_HEADER include/spdk/nvmf.h 00:03:43.969 TEST_HEADER include/spdk/nvmf_spec.h 00:03:43.969 TEST_HEADER include/spdk/nvmf_transport.h 00:03:43.969 TEST_HEADER include/spdk/opal.h 00:03:43.969 TEST_HEADER include/spdk/opal_spec.h 00:03:43.969 TEST_HEADER include/spdk/pci_ids.h 00:03:43.969 CC test/env/mem_callbacks/mem_callbacks.o 00:03:43.969 TEST_HEADER include/spdk/pipe.h 00:03:43.969 TEST_HEADER include/spdk/queue.h 00:03:43.969 TEST_HEADER include/spdk/reduce.h 00:03:43.969 TEST_HEADER include/spdk/rpc.h 00:03:43.969 TEST_HEADER include/spdk/scheduler.h 00:03:43.969 
TEST_HEADER include/spdk/scsi.h 00:03:43.969 TEST_HEADER include/spdk/scsi_spec.h 00:03:43.969 TEST_HEADER include/spdk/sock.h 00:03:43.969 TEST_HEADER include/spdk/stdinc.h 00:03:43.969 TEST_HEADER include/spdk/string.h 00:03:43.969 TEST_HEADER include/spdk/thread.h 00:03:43.969 TEST_HEADER include/spdk/trace.h 00:03:43.969 TEST_HEADER include/spdk/trace_parser.h 00:03:43.969 TEST_HEADER include/spdk/tree.h 00:03:43.969 TEST_HEADER include/spdk/ublk.h 00:03:43.969 TEST_HEADER include/spdk/util.h 00:03:43.969 LINK rpc_client_test 00:03:43.969 TEST_HEADER include/spdk/uuid.h 00:03:43.969 TEST_HEADER include/spdk/version.h 00:03:43.969 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:43.969 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:43.969 TEST_HEADER include/spdk/vhost.h 00:03:43.969 TEST_HEADER include/spdk/vmd.h 00:03:43.969 TEST_HEADER include/spdk/xor.h 00:03:43.969 TEST_HEADER include/spdk/zipf.h 00:03:43.969 CXX test/cpp_headers/accel.o 00:03:43.969 LINK poller_perf 00:03:43.969 LINK nvmf_tgt 00:03:43.969 LINK zipf 00:03:43.969 LINK spdk_trace_record 00:03:43.969 CXX test/cpp_headers/accel_module.o 00:03:43.969 LINK bdev_svc 00:03:43.969 CXX test/cpp_headers/assert.o 00:03:43.969 CXX test/cpp_headers/barrier.o 00:03:44.230 LINK spdk_trace 00:03:44.230 CXX test/cpp_headers/base64.o 00:03:44.230 CXX test/cpp_headers/bdev.o 00:03:44.230 CC test/event/event_perf/event_perf.o 00:03:44.230 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:44.230 CC examples/ioat/perf/perf.o 00:03:44.230 CC examples/vmd/lsvmd/lsvmd.o 00:03:44.230 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:44.230 LINK test_dma 00:03:44.230 LINK event_perf 00:03:44.230 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.230 CXX test/cpp_headers/bdev_module.o 00:03:44.490 LINK mem_callbacks 00:03:44.490 LINK lsvmd 00:03:44.490 CC test/app/histogram_perf/histogram_perf.o 00:03:44.490 LINK ioat_perf 00:03:44.490 CC test/event/reactor/reactor.o 00:03:44.490 CXX test/cpp_headers/bdev_zone.o 00:03:44.490 LINK iscsi_tgt 00:03:44.490 LINK histogram_perf 00:03:44.490 CC test/env/vtophys/vtophys.o 00:03:44.490 CC test/event/reactor_perf/reactor_perf.o 00:03:44.490 CC examples/vmd/led/led.o 00:03:44.490 CC examples/ioat/verify/verify.o 00:03:44.751 LINK reactor 00:03:44.751 LINK nvme_fuzz 00:03:44.751 CXX test/cpp_headers/bit_array.o 00:03:44.751 LINK reactor_perf 00:03:44.751 LINK vtophys 00:03:44.751 LINK led 00:03:44.751 CXX test/cpp_headers/bit_pool.o 00:03:44.751 CXX test/cpp_headers/blob_bdev.o 00:03:44.751 CC test/accel/dif/dif.o 00:03:44.751 LINK verify 00:03:44.751 CC app/spdk_tgt/spdk_tgt.o 00:03:44.751 CC test/event/app_repeat/app_repeat.o 00:03:44.751 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.751 CXX test/cpp_headers/blobfs_bdev.o 00:03:45.012 CC test/event/scheduler/scheduler.o 00:03:45.012 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.012 LINK spdk_tgt 00:03:45.012 LINK env_dpdk_post_init 00:03:45.012 LINK app_repeat 00:03:45.012 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.012 CXX test/cpp_headers/blobfs.o 00:03:45.012 CC examples/idxd/perf/perf.o 00:03:45.012 CC test/blobfs/mkfs/mkfs.o 00:03:45.012 LINK scheduler 00:03:45.274 CXX test/cpp_headers/blob.o 00:03:45.274 CC test/env/memory/memory_ut.o 00:03:45.274 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:45.274 CC app/spdk_lspci/spdk_lspci.o 00:03:45.274 LINK mkfs 00:03:45.274 CXX test/cpp_headers/conf.o 00:03:45.274 LINK spdk_lspci 00:03:45.274 CC test/env/pci/pci_ut.o 00:03:45.274 LINK idxd_perf 00:03:45.274 LINK interrupt_tgt 00:03:45.536 CXX 
test/cpp_headers/config.o 00:03:45.536 LINK vhost_fuzz 00:03:45.536 CXX test/cpp_headers/cpuset.o 00:03:45.536 LINK dif 00:03:45.536 CC app/spdk_nvme_perf/perf.o 00:03:45.536 CC examples/thread/thread/thread_ex.o 00:03:45.536 CXX test/cpp_headers/crc16.o 00:03:45.536 CC test/nvme/aer/aer.o 00:03:45.536 CC examples/sock/hello_world/hello_sock.o 00:03:45.796 CC test/lvol/esnap/esnap.o 00:03:45.796 CC test/nvme/reset/reset.o 00:03:45.796 CXX test/cpp_headers/crc32.o 00:03:45.796 LINK pci_ut 00:03:45.796 LINK thread 00:03:45.796 LINK aer 00:03:45.796 LINK iscsi_fuzz 00:03:45.796 CXX test/cpp_headers/crc64.o 00:03:45.796 LINK hello_sock 00:03:46.057 LINK reset 00:03:46.057 CC test/nvme/sgl/sgl.o 00:03:46.057 CXX test/cpp_headers/dif.o 00:03:46.057 CC test/app/jsoncat/jsoncat.o 00:03:46.057 CC test/app/stub/stub.o 00:03:46.057 CC examples/accel/perf/accel_perf.o 00:03:46.057 LINK spdk_nvme_perf 00:03:46.057 CC test/bdev/bdevio/bdevio.o 00:03:46.057 LINK jsoncat 00:03:46.057 CC test/nvme/e2edp/nvme_dp.o 00:03:46.057 CXX test/cpp_headers/dma.o 00:03:46.319 LINK stub 00:03:46.319 LINK sgl 00:03:46.319 LINK memory_ut 00:03:46.319 CC app/spdk_nvme_identify/identify.o 00:03:46.319 CXX test/cpp_headers/endian.o 00:03:46.319 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.319 CC app/spdk_top/spdk_top.o 00:03:46.319 LINK nvme_dp 00:03:46.319 CC test/nvme/overhead/overhead.o 00:03:46.578 CXX test/cpp_headers/env_dpdk.o 00:03:46.578 LINK spdk_nvme_discover 00:03:46.578 LINK bdevio 00:03:46.578 CC examples/blob/hello_world/hello_blob.o 00:03:46.578 LINK accel_perf 00:03:46.578 CXX test/cpp_headers/env.o 00:03:46.578 LINK overhead 00:03:46.578 CC examples/blob/cli/blobcli.o 00:03:46.578 CC app/vhost/vhost.o 00:03:46.839 LINK hello_blob 00:03:46.839 CC examples/nvme/hello_world/hello_world.o 00:03:46.839 CXX test/cpp_headers/event.o 00:03:46.839 LINK vhost 00:03:46.839 CC test/nvme/err_injection/err_injection.o 00:03:46.839 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:47.101 CXX test/cpp_headers/fd_group.o 00:03:47.101 LINK err_injection 00:03:47.101 LINK blobcli 00:03:47.101 LINK hello_world 00:03:47.101 CC app/spdk_dd/spdk_dd.o 00:03:47.101 CXX test/cpp_headers/fd.o 00:03:47.101 LINK spdk_nvme_identify 00:03:47.101 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.101 CXX test/cpp_headers/file.o 00:03:47.101 LINK hello_fsdev 00:03:47.101 CC test/nvme/startup/startup.o 00:03:47.101 CC examples/nvme/reconnect/reconnect.o 00:03:47.381 CXX test/cpp_headers/fsdev.o 00:03:47.381 LINK spdk_top 00:03:47.381 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:47.381 LINK hello_bdev 00:03:47.381 LINK startup 00:03:47.381 CC app/fio/nvme/fio_plugin.o 00:03:47.382 CXX test/cpp_headers/fsdev_module.o 00:03:47.382 LINK spdk_dd 00:03:47.382 CXX test/cpp_headers/ftl.o 00:03:47.382 CC app/fio/bdev/fio_plugin.o 00:03:47.643 CXX test/cpp_headers/fuse_dispatcher.o 00:03:47.643 CXX test/cpp_headers/gpt_spec.o 00:03:47.643 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.643 LINK reconnect 00:03:47.643 CXX test/cpp_headers/hexlify.o 00:03:47.643 CC test/nvme/reserve/reserve.o 00:03:47.643 CXX test/cpp_headers/histogram_data.o 00:03:47.643 CXX test/cpp_headers/idxd.o 00:03:47.643 CXX test/cpp_headers/idxd_spec.o 00:03:47.643 LINK reserve 00:03:47.643 CC examples/nvme/arbitration/arbitration.o 00:03:47.905 LINK spdk_bdev 00:03:47.905 LINK nvme_manage 00:03:47.905 CXX test/cpp_headers/init.o 00:03:47.905 CXX test/cpp_headers/ioat.o 00:03:47.905 CXX test/cpp_headers/ioat_spec.o 00:03:47.905 CC test/nvme/simple_copy/simple_copy.o 
00:03:47.905 CC examples/nvme/hotplug/hotplug.o 00:03:47.905 LINK spdk_nvme 00:03:47.905 LINK arbitration 00:03:47.905 CXX test/cpp_headers/iscsi_spec.o 00:03:47.905 CXX test/cpp_headers/json.o 00:03:47.905 CXX test/cpp_headers/jsonrpc.o 00:03:47.905 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:47.905 CC examples/nvme/abort/abort.o 00:03:48.167 CXX test/cpp_headers/keyring.o 00:03:48.167 CXX test/cpp_headers/keyring_module.o 00:03:48.167 LINK simple_copy 00:03:48.167 LINK hotplug 00:03:48.167 CXX test/cpp_headers/likely.o 00:03:48.167 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.167 CXX test/cpp_headers/log.o 00:03:48.167 LINK cmb_copy 00:03:48.167 CXX test/cpp_headers/lvol.o 00:03:48.167 CXX test/cpp_headers/md5.o 00:03:48.167 CXX test/cpp_headers/memory.o 00:03:48.167 CC test/nvme/connect_stress/connect_stress.o 00:03:48.167 CXX test/cpp_headers/mmio.o 00:03:48.429 LINK pmr_persistence 00:03:48.429 CXX test/cpp_headers/nbd.o 00:03:48.429 LINK bdevperf 00:03:48.429 CXX test/cpp_headers/net.o 00:03:48.429 LINK abort 00:03:48.429 CC test/nvme/boot_partition/boot_partition.o 00:03:48.429 CXX test/cpp_headers/notify.o 00:03:48.429 CXX test/cpp_headers/nvme.o 00:03:48.429 CXX test/cpp_headers/nvme_intel.o 00:03:48.429 LINK connect_stress 00:03:48.429 CXX test/cpp_headers/nvme_ocssd.o 00:03:48.429 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:48.429 CC test/nvme/compliance/nvme_compliance.o 00:03:48.429 CC test/nvme/fused_ordering/fused_ordering.o 00:03:48.429 LINK boot_partition 00:03:48.692 CXX test/cpp_headers/nvme_spec.o 00:03:48.692 CXX test/cpp_headers/nvme_zns.o 00:03:48.692 CXX test/cpp_headers/nvmf_cmd.o 00:03:48.692 CC examples/nvmf/nvmf/nvmf.o 00:03:48.692 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:48.692 LINK fused_ordering 00:03:48.692 CC test/nvme/fdp/fdp.o 00:03:48.692 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:48.692 CC test/nvme/cuse/cuse.o 00:03:48.692 CXX test/cpp_headers/nvmf.o 00:03:48.692 LINK nvme_compliance 00:03:48.692 CXX test/cpp_headers/nvmf_spec.o 00:03:48.953 CXX test/cpp_headers/nvmf_transport.o 00:03:48.953 CXX test/cpp_headers/opal.o 00:03:48.953 CXX test/cpp_headers/opal_spec.o 00:03:48.953 LINK doorbell_aers 00:03:48.953 CXX test/cpp_headers/pci_ids.o 00:03:48.953 LINK nvmf 00:03:48.953 CXX test/cpp_headers/pipe.o 00:03:48.953 CXX test/cpp_headers/queue.o 00:03:48.953 CXX test/cpp_headers/reduce.o 00:03:48.953 CXX test/cpp_headers/rpc.o 00:03:48.953 CXX test/cpp_headers/scheduler.o 00:03:48.953 CXX test/cpp_headers/scsi.o 00:03:48.953 LINK fdp 00:03:48.953 CXX test/cpp_headers/scsi_spec.o 00:03:48.953 CXX test/cpp_headers/sock.o 00:03:49.213 CXX test/cpp_headers/stdinc.o 00:03:49.213 CXX test/cpp_headers/string.o 00:03:49.213 CXX test/cpp_headers/thread.o 00:03:49.213 CXX test/cpp_headers/trace.o 00:03:49.213 CXX test/cpp_headers/trace_parser.o 00:03:49.213 CXX test/cpp_headers/tree.o 00:03:49.213 CXX test/cpp_headers/ublk.o 00:03:49.213 CXX test/cpp_headers/util.o 00:03:49.213 CXX test/cpp_headers/uuid.o 00:03:49.213 CXX test/cpp_headers/version.o 00:03:49.213 CXX test/cpp_headers/vfio_user_pci.o 00:03:49.213 CXX test/cpp_headers/vfio_user_spec.o 00:03:49.213 CXX test/cpp_headers/vhost.o 00:03:49.213 CXX test/cpp_headers/vmd.o 00:03:49.213 CXX test/cpp_headers/xor.o 00:03:49.213 CXX test/cpp_headers/zipf.o 00:03:49.785 LINK cuse 00:03:50.752 LINK esnap 00:03:51.322 00:03:51.322 real 1m4.388s 00:03:51.322 user 5m57.751s 00:03:51.322 sys 1m3.658s 00:03:51.322 ************************************ 00:03:51.322 16:50:25 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:51.322 16:50:25 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.322 END TEST make 00:03:51.322 ************************************ 00:03:51.322 16:50:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.322 16:50:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.322 16:50:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.322 16:50:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.322 16:50:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.322 16:50:25 -- pm/common@44 -- $ pid=5086 00:03:51.322 16:50:25 -- pm/common@50 -- $ kill -TERM 5086 00:03:51.322 16:50:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.322 16:50:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.322 16:50:25 -- pm/common@44 -- $ pid=5088 00:03:51.322 16:50:25 -- pm/common@50 -- $ kill -TERM 5088 00:03:51.322 16:50:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:51.322 16:50:25 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:51.322 16:50:25 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:51.322 16:50:25 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:51.322 16:50:25 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:51.322 16:50:25 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:51.322 16:50:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:51.322 16:50:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:51.322 16:50:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:51.322 16:50:25 -- scripts/common.sh@336 -- # IFS=.-: 00:03:51.322 16:50:25 -- scripts/common.sh@336 -- # read -ra ver1 00:03:51.322 16:50:25 -- scripts/common.sh@337 -- # IFS=.-: 00:03:51.322 16:50:25 -- scripts/common.sh@337 -- # read -ra ver2 00:03:51.322 16:50:25 -- scripts/common.sh@338 -- # local 'op=<' 00:03:51.322 16:50:25 -- scripts/common.sh@340 -- # ver1_l=2 00:03:51.322 16:50:25 -- scripts/common.sh@341 -- # ver2_l=1 00:03:51.322 16:50:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:51.322 16:50:25 -- scripts/common.sh@344 -- # case "$op" in 00:03:51.322 16:50:25 -- scripts/common.sh@345 -- # : 1 00:03:51.322 16:50:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:51.322 16:50:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:51.322 16:50:25 -- scripts/common.sh@365 -- # decimal 1 00:03:51.322 16:50:25 -- scripts/common.sh@353 -- # local d=1 00:03:51.322 16:50:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:51.322 16:50:25 -- scripts/common.sh@355 -- # echo 1 00:03:51.322 16:50:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:51.322 16:50:25 -- scripts/common.sh@366 -- # decimal 2 00:03:51.322 16:50:25 -- scripts/common.sh@353 -- # local d=2 00:03:51.322 16:50:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:51.322 16:50:25 -- scripts/common.sh@355 -- # echo 2 00:03:51.322 16:50:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:51.322 16:50:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:51.322 16:50:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:51.322 16:50:25 -- scripts/common.sh@368 -- # return 0 00:03:51.322 16:50:25 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:51.322 16:50:25 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.322 --rc genhtml_branch_coverage=1 00:03:51.322 --rc genhtml_function_coverage=1 00:03:51.322 --rc genhtml_legend=1 00:03:51.322 --rc geninfo_all_blocks=1 00:03:51.322 --rc geninfo_unexecuted_blocks=1 00:03:51.322 00:03:51.322 ' 00:03:51.322 16:50:25 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.322 --rc genhtml_branch_coverage=1 00:03:51.322 --rc genhtml_function_coverage=1 00:03:51.322 --rc genhtml_legend=1 00:03:51.322 --rc geninfo_all_blocks=1 00:03:51.322 --rc geninfo_unexecuted_blocks=1 00:03:51.322 00:03:51.322 ' 00:03:51.322 16:50:25 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.322 --rc genhtml_branch_coverage=1 00:03:51.322 --rc genhtml_function_coverage=1 00:03:51.322 --rc genhtml_legend=1 00:03:51.322 --rc geninfo_all_blocks=1 00:03:51.322 --rc geninfo_unexecuted_blocks=1 00:03:51.322 00:03:51.322 ' 00:03:51.322 16:50:25 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:51.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:51.322 --rc genhtml_branch_coverage=1 00:03:51.322 --rc genhtml_function_coverage=1 00:03:51.322 --rc genhtml_legend=1 00:03:51.322 --rc geninfo_all_blocks=1 00:03:51.322 --rc geninfo_unexecuted_blocks=1 00:03:51.322 00:03:51.322 ' 00:03:51.322 16:50:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:51.322 16:50:25 -- nvmf/common.sh@7 -- # uname -s 00:03:51.322 16:50:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.322 16:50:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.322 16:50:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:51.322 16:50:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.322 16:50:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.322 16:50:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.322 16:50:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.322 16:50:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.322 16:50:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.322 16:50:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:51.322 16:50:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2138609c-c320-4c7c-acc3-736a9e124d02 00:03:51.322 
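
The trace above walks the cmp_versions helper in scripts/common.sh: both version strings are split on ".", "-" and ":" into arrays and compared field by field, which is how the harness decides that lcov 1.15 predates 2 and exports the matching LCOV_OPTS spelling. A minimal standalone sketch of the same comparison, not the harness's exact code (ver_lt is a hypothetical name; purely numeric fields assumed):

    # Return 0 (true) when dotted version $1 is strictly older than $2.
    ver_lt() {
        local IFS=.-:
        local -a a b
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            # Missing fields count as 0, so "2" compares like "2.0"
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"
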
16:50:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=2138609c-c320-4c7c-acc3-736a9e124d02 00:03:51.322 16:50:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:51.322 16:50:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:51.323 16:50:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:51.323 16:50:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:51.323 16:50:25 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:51.323 16:50:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:51.323 16:50:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:51.323 16:50:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:51.323 16:50:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:51.323 16:50:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.323 16:50:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.323 16:50:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.323 16:50:25 -- paths/export.sh@5 -- # export PATH 00:03:51.323 16:50:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:51.323 16:50:25 -- nvmf/common.sh@51 -- # : 0 00:03:51.323 16:50:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:51.323 16:50:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:51.323 16:50:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:51.323 16:50:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:51.323 16:50:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:51.323 16:50:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:51.323 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:51.323 16:50:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:51.323 16:50:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:51.323 16:50:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:51.323 16:50:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:51.323 16:50:25 -- spdk/autotest.sh@32 -- # uname -s 00:03:51.323 16:50:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:51.323 16:50:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:51.323 16:50:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.323 16:50:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:51.323 16:50:25 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:51.323 16:50:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:51.323 16:50:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:51.323 16:50:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:51.323 16:50:25 -- spdk/autotest.sh@48 -- # udevadm_pid=54229 00:03:51.323 16:50:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:51.323 16:50:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:51.323 16:50:25 -- pm/common@17 -- # local monitor 00:03:51.323 16:50:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.323 16:50:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.323 16:50:25 -- pm/common@25 -- # sleep 1 00:03:51.323 16:50:25 -- pm/common@21 -- # date +%s 00:03:51.323 16:50:25 -- pm/common@21 -- # date +%s 00:03:51.323 16:50:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733417425 00:03:51.323 16:50:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733417425 00:03:51.583 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733417425_collect-cpu-load.pm.log 00:03:51.583 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733417425_collect-vmstat.pm.log 00:03:52.524 16:50:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:52.524 16:50:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:52.524 16:50:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:52.524 16:50:26 -- common/autotest_common.sh@10 -- # set +x 00:03:52.524 16:50:26 -- spdk/autotest.sh@59 -- # create_test_list 00:03:52.524 16:50:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:52.524 16:50:26 -- common/autotest_common.sh@10 -- # set +x 00:03:52.524 16:50:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:52.524 16:50:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:52.524 16:50:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:52.524 16:50:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:52.524 16:50:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:52.524 16:50:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:52.524 16:50:26 -- common/autotest_common.sh@1457 -- # uname 00:03:52.524 16:50:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:52.524 16:50:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:52.524 16:50:26 -- common/autotest_common.sh@1477 -- # uname 00:03:52.525 16:50:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:52.525 16:50:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:52.525 16:50:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:52.525 lcov: LCOV version 1.15 00:03:52.525 16:50:26 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.783 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.783 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:17.016 16:50:51 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:17.016 16:50:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:17.016 16:50:51 -- common/autotest_common.sh@10 -- # set +x 00:04:17.016 16:50:51 -- spdk/autotest.sh@78 -- # rm -f 00:04:17.016 16:50:51 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:17.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:17.858 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:17.858 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:17.858 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:17.858 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:17.858 16:50:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:17.858 16:50:52 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:17.858 16:50:52 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:17.858 16:50:52 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:17.859 16:50:52 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:17.859 16:50:52 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:17.859 16:50:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:04:17.859 16:50:52 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:17.859 16:50:52 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:17.859 16:50:52 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:17.859 16:50:52 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:17.859 16:50:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:17.859 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:17.859 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:17.859 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:17.859 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:17.859 16:50:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:18.121 No valid GPT data, bailing 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.121 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.121 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:18.121 1+0 records in 00:04:18.121 1+0 records out 00:04:18.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0265705 s, 39.5 MB/s 00:04:18.121 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.121 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.121 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:18.121 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:18.121 16:50:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:18.121 No valid GPT data, bailing 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.121 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.121 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:18.121 1+0 records in 00:04:18.121 1+0 records out 00:04:18.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0058525 s, 179 MB/s 00:04:18.121 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.121 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.121 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:18.121 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:18.121 16:50:52 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:18.121 No valid GPT data, bailing 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:18.121 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.121 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.121 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:18.121 1+0 records in 00:04:18.121 1+0 records out 00:04:18.121 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00646907 s, 162 MB/s 00:04:18.121 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.121 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.121 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:18.121 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:18.121 16:50:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:18.382 No valid GPT data, bailing 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.382 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.382 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:18.382 1+0 records in 00:04:18.382 1+0 records out 00:04:18.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0070449 s, 149 MB/s 00:04:18.382 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.382 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.382 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:18.382 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:18.382 16:50:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:18.382 No valid GPT data, bailing 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.382 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.382 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:18.382 1+0 records in 00:04:18.382 1+0 records out 00:04:18.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00666457 s, 157 MB/s 00:04:18.382 16:50:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:18.382 16:50:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:18.382 16:50:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:18.382 16:50:52 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:18.382 16:50:52 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:18.382 No valid GPT data, bailing 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:18.382 16:50:52 -- scripts/common.sh@394 -- # pt= 00:04:18.382 16:50:52 -- scripts/common.sh@395 -- # return 1 00:04:18.382 16:50:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:18.382 1+0 records in 00:04:18.382 1+0 records out 00:04:18.382 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712728 s, 147 MB/s 00:04:18.382 16:50:52 -- spdk/autotest.sh@105 -- # sync 00:04:18.952 16:50:53 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:18.952 16:50:53 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:18.953 16:50:53 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:20.878 
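
The pre_cleanup loop traced above probes each namespace with spdk-gpt.py, falls back to blkid -s PTTYPE, and, when no partition-table signature turns up ("No valid GPT data, bailing", empty PTTYPE, return 1), zeroes the first MiB with dd so stale metadata cannot leak into later tests. A condensed sketch of that probe-and-wipe pass; it is destructive, the simple glob below stands in for the harness's extglob pattern /dev/nvme*n!(*p*), and it must run as root against disposable test disks only:

    #!/usr/bin/env bash
    for dev in /dev/nvme*n1; do
        # blkid exits non-zero when the tag is absent, hence the fallback
        pt=$(blkid -s PTTYPE -o value "$dev" || true)
        if [[ -z "$pt" ]]; then
            # No GPT/MBR signature found: clear the first 1 MiB,
            # mirroring the dd if=/dev/zero ... bs=1M count=1 calls above
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done
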
16:50:54 -- spdk/autotest.sh@111 -- # uname -s 00:04:20.878 16:50:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:20.878 16:50:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:20.878 16:50:54 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:21.153 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:21.724 Hugepages
00:04:21.724 node hugesize free / total
00:04:21.724 node0 1048576kB 0 / 0
00:04:21.724 node0 2048kB 0 / 0
00:04:21.724
00:04:21.724 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:21.724 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:04:21.724 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:04:21.724 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:04:21.986 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:04:21.986 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:04:21.986 16:50:56 -- spdk/autotest.sh@117 -- # uname -s 00:04:21.986 16:50:56 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:21.986 16:50:56 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:21.986 16:50:56 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.818 16:50:57 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:24.206 16:50:58 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:24.206 16:50:58 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:24.206 16:50:58 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:24.206 16:50:58 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:24.206 16:50:58 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:24.206 16:50:58 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:24.206 16:50:58 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:24.206 16:50:58 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:24.206 16:50:58 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:24.206 16:50:58 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:24.206 16:50:58 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:24.206 16:50:58 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.206 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.466 Waiting for block devices as requested 00:04:24.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.466 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.466 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.726 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:29.992 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:29.992 16:51:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:29.992 16:51:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:29.992 16:51:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:29.992 16:51:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:29.992 16:51:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1543 -- # continue 00:04:29.992 16:51:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:29.992 16:51:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:29.992 16:51:03 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:29.992 16:51:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:29.992 16:51:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.992 16:51:03 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:04:29.992 16:51:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:29.992 16:51:03 -- common/autotest_common.sh@1543 -- # continue 00:04:29.992 16:51:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:29.992 16:51:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:29.992 16:51:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:29.992 16:51:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:29.992 16:51:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:29.992 16:51:04 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:29.992 16:51:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1543 -- # continue 00:04:29.992 16:51:04 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:29.992 16:51:04 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:29.992 16:51:04 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:29.992 16:51:04 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:29.992 16:51:04 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:29.992 16:51:04 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:29.992 16:51:04 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:29.992 16:51:04 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:29.992 16:51:04 -- common/autotest_common.sh@1543 -- # continue 00:04:29.992 16:51:04 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:29.992 16:51:04 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:29.992 16:51:04 -- common/autotest_common.sh@10 -- # set +x 00:04:29.992 16:51:04 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:29.992 16:51:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:29.992 16:51:04 -- common/autotest_common.sh@10 -- # set +x 00:04:29.992 16:51:04 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:30.250 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:30.891 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.891 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.891 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.891 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:30.891 16:51:05 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:30.891 16:51:05 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.891 16:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 16:51:05 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:31.148 16:51:05 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:31.148 16:51:05 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:31.148 16:51:05 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:31.148 16:51:05 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:31.148 16:51:05 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:31.148 16:51:05 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:31.148 16:51:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:31.148 16:51:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:31.148 16:51:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:31.148 16:51:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:31.148 16:51:05 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:31.148 16:51:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:31.148 16:51:05 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:31.148 16:51:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:31.148 16:51:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.148 16:51:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.148 16:51:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.148 
16:51:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.148 16:51:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.148 16:51:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.148 16:51:05 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:31.148 16:51:05 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:31.148 16:51:05 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:31.148 16:51:05 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:31.148 16:51:05 -- common/autotest_common.sh@1572 -- # return 0 00:04:31.148 16:51:05 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:31.148 16:51:05 -- common/autotest_common.sh@1580 -- # return 0 00:04:31.148 16:51:05 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:31.148 16:51:05 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:31.148 16:51:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:31.148 16:51:05 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:31.148 16:51:05 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:31.148 16:51:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.148 16:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 16:51:05 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:31.148 16:51:05 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.148 16:51:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.148 16:51:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.148 16:51:05 -- common/autotest_common.sh@10 -- # set +x 00:04:31.148 ************************************ 00:04:31.148 START TEST env 00:04:31.148 ************************************ 00:04:31.149 16:51:05 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:31.149 * Looking for test storage... 
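
The opal_revert_cleanup pass traced above strings together three sysfs/nvme-cli checks per controller: the PCI address is resolved to its /dev node by readlink on /sys/class/nvme, the oacs word from nvme id-ctrl is masked with 0x8 (bit 3, namespace management, hence oacs_ns_manage=8), and the PCI device id is compared against 0x0a54; the QEMU controllers in this run report 0x0010, so the cleanup is skipped. A compact sketch of the same checks (requires nvme-cli; the BDF is one example address from this run):

    bdf=0000:00:10.0   # example BDF taken from this run
    # Resolve the BDF to its controller, as get_nvme_ctrlr_from_bdf does
    path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
    ctrl=/dev/$(basename "$path")
    # OACS word from id-ctrl; bit 3 set means namespace management
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "$ctrl supports namespace management"
    # Device-id probe: 0x0a54 marks the controllers the cleanup targets
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] &&
        echo "$bdf is an OPAL cleanup target"
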
00:04:31.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:31.149 16:51:05 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:31.149 16:51:05 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:31.149 16:51:05 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:31.408 16:51:05 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:31.408 16:51:05 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.408 16:51:05 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.408 16:51:05 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.408 16:51:05 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.408 16:51:05 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.408 16:51:05 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.408 16:51:05 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.408 16:51:05 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.408 16:51:05 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.408 16:51:05 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.408 16:51:05 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.408 16:51:05 env -- scripts/common.sh@344 -- # case "$op" in 00:04:31.408 16:51:05 env -- scripts/common.sh@345 -- # : 1 00:04:31.408 16:51:05 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.408 16:51:05 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:31.408 16:51:05 env -- scripts/common.sh@365 -- # decimal 1 00:04:31.408 16:51:05 env -- scripts/common.sh@353 -- # local d=1 00:04:31.408 16:51:05 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.408 16:51:05 env -- scripts/common.sh@355 -- # echo 1 00:04:31.408 16:51:05 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.408 16:51:05 env -- scripts/common.sh@366 -- # decimal 2 00:04:31.409 16:51:05 env -- scripts/common.sh@353 -- # local d=2 00:04:31.409 16:51:05 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.409 16:51:05 env -- scripts/common.sh@355 -- # echo 2 00:04:31.409 16:51:05 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.409 16:51:05 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.409 16:51:05 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.409 16:51:05 env -- scripts/common.sh@368 -- # return 0 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.409 --rc geninfo_unexecuted_blocks=1 00:04:31.409 00:04:31.409 ' 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.409 --rc geninfo_unexecuted_blocks=1 00:04:31.409 00:04:31.409 ' 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc 
genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.409 --rc geninfo_unexecuted_blocks=1 00:04:31.409 00:04:31.409 ' 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:31.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.409 --rc genhtml_branch_coverage=1 00:04:31.409 --rc genhtml_function_coverage=1 00:04:31.409 --rc genhtml_legend=1 00:04:31.409 --rc geninfo_all_blocks=1 00:04:31.409 --rc geninfo_unexecuted_blocks=1 00:04:31.409 00:04:31.409 ' 00:04:31.409 16:51:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.409 16:51:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.409 16:51:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.409 ************************************ 00:04:31.409 START TEST env_memory 00:04:31.409 ************************************ 00:04:31.409 16:51:05 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:31.409 00:04:31.409 00:04:31.409 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.409 http://cunit.sourceforge.net/ 00:04:31.409 00:04:31.409 00:04:31.409 Suite: memory 00:04:31.409 Test: alloc and free memory map ...[2024-12-05 16:51:05.611026] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:31.409 passed 00:04:31.409 Test: mem map translation ...[2024-12-05 16:51:05.649694] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:31.409 [2024-12-05 16:51:05.649752] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:31.409 [2024-12-05 16:51:05.649812] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:31.409 [2024-12-05 16:51:05.649827] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:31.409 passed 00:04:31.409 Test: mem map registration ...[2024-12-05 16:51:05.717958] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:31.409 [2024-12-05 16:51:05.718008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:31.409 passed 00:04:31.670 Test: mem map adjacent registrations ...passed 00:04:31.670 00:04:31.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:31.670 suites 1 1 n/a 0 0 00:04:31.670 tests 4 4 4 0 0 00:04:31.670 asserts 152 152 152 0 n/a 00:04:31.670 00:04:31.670 Elapsed time = 0.233 seconds 00:04:31.670 00:04:31.670 real 0m0.273s 00:04:31.670 user 0m0.239s 00:04:31.670 sys 0m0.026s 00:04:31.670 16:51:05 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:31.670 ************************************ 00:04:31.670 END TEST env_memory 00:04:31.670 ************************************ 00:04:31.670 16:51:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:31.670 16:51:05 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.670 16:51:05 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.670 16:51:05 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.670 16:51:05 env -- common/autotest_common.sh@10 -- # set +x 00:04:31.670 ************************************ 00:04:31.670 START TEST env_vtophys 00:04:31.670 ************************************ 00:04:31.670 16:51:05 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:31.670 EAL: lib.eal log level changed from notice to debug 00:04:31.670 EAL: Detected lcore 0 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 1 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 2 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 3 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 4 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 5 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 6 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 7 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 8 as core 0 on socket 0 00:04:31.670 EAL: Detected lcore 9 as core 0 on socket 0 00:04:31.670 EAL: Maximum logical cores by configuration: 128 00:04:31.670 EAL: Detected CPU lcores: 10 00:04:31.670 EAL: Detected NUMA nodes: 1 00:04:31.670 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:31.670 EAL: Detected shared linkage of DPDK 00:04:31.670 EAL: No shared files mode enabled, IPC will be disabled 00:04:31.670 EAL: Selected IOVA mode 'PA' 00:04:31.670 EAL: Probing VFIO support... 00:04:31.670 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.670 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:31.670 EAL: Ask a virtual area of 0x2e000 bytes 00:04:31.670 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:31.670 EAL: Setting up physically contiguous memory... 
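The env_memory suite above and the vtophys run initializing here can be launched directly from an SPDK checkout. A minimal sketch, assuming an in-tree build at the paths this log uses and that hugepages were already reserved (the setup.sh step is an assumed prerequisite, not part of this run):

cd /home/vagrant/spdk_repo/spdk
sudo HUGEMEM=512 ./scripts/setup.sh    # assumed prerequisite: reserve hugepages for the EAL
./test/env/memory/memory_ut            # CUnit "memory" suite: mem map alloc, translation, registration
./test/env/vtophys/vtophys             # CUnit "components_suite" whose EAL bring-up is shown here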
00:04:31.670 EAL: Setting maximum number of open files to 524288 00:04:31.670 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:31.670 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:31.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.670 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:31.670 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.670 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.670 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:31.670 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:31.670 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.671 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:31.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.671 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:31.671 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:31.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.671 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:31.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.671 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:31.671 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:31.671 EAL: Ask a virtual area of 0x61000 bytes 00:04:31.671 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:31.671 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:31.671 EAL: Ask a virtual area of 0x400000000 bytes 00:04:31.671 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:31.671 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:31.671 EAL: Hugepages will be freed exactly as allocated. 00:04:31.671 EAL: No shared files mode enabled, IPC is disabled 00:04:31.671 EAL: No shared files mode enabled, IPC is disabled 00:04:31.930 EAL: TSC frequency is ~2600000 KHz 00:04:31.930 EAL: Main lcore 0 is ready (tid=7f75848eba40;cpuset=[0]) 00:04:31.930 EAL: Trying to obtain current memory policy. 00:04:31.930 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:31.930 EAL: Restoring previous memory policy: 0 00:04:31.930 EAL: request: mp_malloc_sync 00:04:31.930 EAL: No shared files mode enabled, IPC is disabled 00:04:31.930 EAL: Heap on socket 0 was expanded by 2MB 00:04:31.930 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:31.930 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:31.930 EAL: Mem event callback 'spdk:(nil)' registered 00:04:31.930 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:31.930 00:04:31.930 00:04:31.930 CUnit - A unit testing framework for C - Version 2.1-3 00:04:31.930 http://cunit.sourceforge.net/ 00:04:31.930 00:04:31.930 00:04:31.930 Suite: components_suite 00:04:32.190 Test: vtophys_malloc_test ...passed 00:04:32.190 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:32.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.190 EAL: Restoring previous memory policy: 4 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was expanded by 4MB 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was shrunk by 4MB 00:04:32.190 EAL: Trying to obtain current memory policy. 00:04:32.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.190 EAL: Restoring previous memory policy: 4 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was expanded by 6MB 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was shrunk by 6MB 00:04:32.190 EAL: Trying to obtain current memory policy. 00:04:32.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.190 EAL: Restoring previous memory policy: 4 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was expanded by 10MB 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was shrunk by 10MB 00:04:32.190 EAL: Trying to obtain current memory policy. 00:04:32.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.190 EAL: Restoring previous memory policy: 4 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was expanded by 18MB 00:04:32.190 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.190 EAL: request: mp_malloc_sync 00:04:32.190 EAL: No shared files mode enabled, IPC is disabled 00:04:32.190 EAL: Heap on socket 0 was shrunk by 18MB 00:04:32.447 EAL: Trying to obtain current memory policy. 00:04:32.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.447 EAL: Restoring previous memory policy: 4 00:04:32.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.447 EAL: request: mp_malloc_sync 00:04:32.447 EAL: No shared files mode enabled, IPC is disabled 00:04:32.447 EAL: Heap on socket 0 was expanded by 34MB 00:04:32.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.447 EAL: request: mp_malloc_sync 00:04:32.447 EAL: No shared files mode enabled, IPC is disabled 00:04:32.447 EAL: Heap on socket 0 was shrunk by 34MB 00:04:32.447 EAL: Trying to obtain current memory policy. 
00:04:32.447 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.447 EAL: Restoring previous memory policy: 4 00:04:32.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.447 EAL: request: mp_malloc_sync 00:04:32.447 EAL: No shared files mode enabled, IPC is disabled 00:04:32.447 EAL: Heap on socket 0 was expanded by 66MB 00:04:32.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.447 EAL: request: mp_malloc_sync 00:04:32.447 EAL: No shared files mode enabled, IPC is disabled 00:04:32.447 EAL: Heap on socket 0 was shrunk by 66MB 00:04:32.704 EAL: Trying to obtain current memory policy. 00:04:32.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.704 EAL: Restoring previous memory policy: 4 00:04:32.704 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.704 EAL: request: mp_malloc_sync 00:04:32.704 EAL: No shared files mode enabled, IPC is disabled 00:04:32.704 EAL: Heap on socket 0 was expanded by 130MB 00:04:32.704 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.704 EAL: request: mp_malloc_sync 00:04:32.704 EAL: No shared files mode enabled, IPC is disabled 00:04:32.704 EAL: Heap on socket 0 was shrunk by 130MB 00:04:32.960 EAL: Trying to obtain current memory policy. 00:04:32.960 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.960 EAL: Restoring previous memory policy: 4 00:04:32.960 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.960 EAL: request: mp_malloc_sync 00:04:32.960 EAL: No shared files mode enabled, IPC is disabled 00:04:32.960 EAL: Heap on socket 0 was expanded by 258MB 00:04:33.216 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.216 EAL: request: mp_malloc_sync 00:04:33.216 EAL: No shared files mode enabled, IPC is disabled 00:04:33.216 EAL: Heap on socket 0 was shrunk by 258MB 00:04:33.472 EAL: Trying to obtain current memory policy. 00:04:33.472 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.472 EAL: Restoring previous memory policy: 4 00:04:33.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:33.472 EAL: request: mp_malloc_sync 00:04:33.472 EAL: No shared files mode enabled, IPC is disabled 00:04:33.472 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.042 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.301 EAL: request: mp_malloc_sync 00:04:34.301 EAL: No shared files mode enabled, IPC is disabled 00:04:34.301 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.870 EAL: Trying to obtain current memory policy. 
00:04:34.870 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.870 EAL: Restoring previous memory policy: 4 00:04:34.870 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.870 EAL: request: mp_malloc_sync 00:04:34.870 EAL: No shared files mode enabled, IPC is disabled 00:04:34.870 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.803 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.803 EAL: request: mp_malloc_sync 00:04:35.803 EAL: No shared files mode enabled, IPC is disabled 00:04:35.803 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.737 passed 00:04:36.737 00:04:36.737 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.737 suites 1 1 n/a 0 0 00:04:36.737 tests 2 2 2 0 0 00:04:36.737 asserts 5852 5852 5852 0 n/a 00:04:36.737 00:04:36.737 Elapsed time = 4.725 seconds 00:04:36.737 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.737 EAL: request: mp_malloc_sync 00:04:36.737 EAL: No shared files mode enabled, IPC is disabled 00:04:36.737 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.737 EAL: No shared files mode enabled, IPC is disabled 00:04:36.737 EAL: No shared files mode enabled, IPC is disabled 00:04:36.737 EAL: No shared files mode enabled, IPC is disabled 00:04:36.737 00:04:36.737 real 0m5.014s 00:04:36.737 user 0m4.126s 00:04:36.737 sys 0m0.735s 00:04:36.737 16:51:10 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.737 ************************************ 00:04:36.737 END TEST env_vtophys 00:04:36.737 ************************************ 00:04:36.737 16:51:10 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.737 16:51:10 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.737 16:51:10 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.737 16:51:10 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.737 16:51:10 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.737 ************************************ 00:04:36.737 START TEST env_pci 00:04:36.737 ************************************ 00:04:36.737 16:51:10 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.737 00:04:36.737 00:04:36.737 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.737 http://cunit.sourceforge.net/ 00:04:36.737 00:04:36.737 00:04:36.737 Suite: pci 00:04:36.737 Test: pci_hook ...[2024-12-05 16:51:10.987853] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56968 has claimed it 00:04:36.737 passed 00:04:36.737 00:04:36.737 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.737 suites 1 1 n/a 0 0 00:04:36.737 tests 1 1 1 0 0 00:04:36.737 asserts 25 25 25 0 n/a 00:04:36.737 00:04:36.737 Elapsed time = 0.005 seconds 00:04:36.737 EAL: Cannot find device (10000:00:01.0) 00:04:36.737 EAL: Failed to attach device on primary process 00:04:36.737 00:04:36.737 real 0m0.061s 00:04:36.737 user 0m0.028s 00:04:36.737 sys 0m0.032s 00:04:36.737 16:51:11 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.737 ************************************ 00:04:36.737 END TEST env_pci 00:04:36.737 ************************************ 00:04:36.737 16:51:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.737 16:51:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.737 16:51:11 env -- env/env.sh@15 -- # uname 00:04:36.737 16:51:11 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.737 16:51:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.737 16:51:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.737 16:51:11 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:36.737 16:51:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.737 16:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.737 ************************************ 00:04:36.737 START TEST env_dpdk_post_init 00:04:36.737 ************************************ 00:04:36.737 16:51:11 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.998 EAL: Detected CPU lcores: 10 00:04:36.998 EAL: Detected NUMA nodes: 1 00:04:36.998 EAL: Detected shared linkage of DPDK 00:04:36.998 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.998 EAL: Selected IOVA mode 'PA' 00:04:36.998 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:36.998 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:36.998 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:36.998 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:36.998 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:36.998 Starting DPDK initialization... 00:04:36.998 Starting SPDK post initialization... 00:04:36.998 SPDK NVMe probe 00:04:36.998 Attaching to 0000:00:10.0 00:04:36.998 Attaching to 0000:00:11.0 00:04:36.998 Attaching to 0000:00:12.0 00:04:36.998 Attaching to 0000:00:13.0 00:04:36.998 Attached to 0000:00:10.0 00:04:36.998 Attached to 0000:00:11.0 00:04:36.998 Attached to 0000:00:13.0 00:04:36.998 Attached to 0000:00:12.0 00:04:36.998 Cleaning up... 
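The probe/attach sequence above (four emulated QEMU NVMe controllers, 1b36:0010) can be reproduced outside the harness with SPDK's bundled identify example. A sketch, assuming an in-tree build; the transport address is one of the devices from this log:

./build/examples/identify -r 'trtype:PCIe traddr:0000:00:10.0'    # probes and prints one controller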
00:04:36.998 00:04:36.998 real 0m0.231s 00:04:36.998 user 0m0.066s 00:04:36.998 sys 0m0.067s 00:04:36.998 16:51:11 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.998 ************************************ 00:04:36.998 END TEST env_dpdk_post_init 00:04:36.998 ************************************ 00:04:36.998 16:51:11 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.998 16:51:11 env -- env/env.sh@26 -- # uname 00:04:36.998 16:51:11 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:36.998 16:51:11 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:36.998 16:51:11 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.998 16:51:11 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.998 16:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.998 ************************************ 00:04:36.998 START TEST env_mem_callbacks 00:04:36.998 ************************************ 00:04:36.998 16:51:11 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.260 EAL: Detected CPU lcores: 10 00:04:37.260 EAL: Detected NUMA nodes: 1 00:04:37.260 EAL: Detected shared linkage of DPDK 00:04:37.260 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.260 EAL: Selected IOVA mode 'PA' 00:04:37.260 00:04:37.260 00:04:37.260 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.260 http://cunit.sourceforge.net/ 00:04:37.260 00:04:37.260 00:04:37.260 Suite: memory 00:04:37.260 Test: test ... 00:04:37.260 register 0x200000200000 2097152 00:04:37.260 malloc 3145728 00:04:37.260 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.260 register 0x200000400000 4194304 00:04:37.260 buf 0x2000004fffc0 len 3145728 PASSED 00:04:37.260 malloc 64 00:04:37.260 buf 0x2000004ffec0 len 64 PASSED 00:04:37.260 malloc 4194304 00:04:37.260 register 0x200000800000 6291456 00:04:37.260 buf 0x2000009fffc0 len 4194304 PASSED 00:04:37.260 free 0x2000004fffc0 3145728 00:04:37.260 free 0x2000004ffec0 64 00:04:37.260 unregister 0x200000400000 4194304 PASSED 00:04:37.260 free 0x2000009fffc0 4194304 00:04:37.260 unregister 0x200000800000 6291456 PASSED 00:04:37.260 malloc 8388608 00:04:37.260 register 0x200000400000 10485760 00:04:37.260 buf 0x2000005fffc0 len 8388608 PASSED 00:04:37.260 free 0x2000005fffc0 8388608 00:04:37.260 unregister 0x200000400000 10485760 PASSED 00:04:37.260 passed 00:04:37.260 00:04:37.260 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.260 suites 1 1 n/a 0 0 00:04:37.260 tests 1 1 1 0 0 00:04:37.260 asserts 15 15 15 0 n/a 00:04:37.260 00:04:37.260 Elapsed time = 0.040 seconds 00:04:37.260 00:04:37.260 real 0m0.209s 00:04:37.260 user 0m0.053s 00:04:37.260 sys 0m0.054s 00:04:37.260 16:51:11 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.260 16:51:11 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.260 ************************************ 00:04:37.260 END TEST env_mem_callbacks 00:04:37.260 ************************************ 00:04:37.260 00:04:37.260 real 0m6.216s 00:04:37.260 user 0m4.653s 00:04:37.260 sys 0m1.145s 00:04:37.260 16:51:11 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.260 16:51:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.260 ************************************ 00:04:37.260 END TEST env 00:04:37.260 
************************************ 00:04:37.519 16:51:11 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.519 16:51:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.519 16:51:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.519 16:51:11 -- common/autotest_common.sh@10 -- # set +x 00:04:37.519 ************************************ 00:04:37.519 START TEST rpc 00:04:37.519 ************************************ 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.519 * Looking for test storage... 00:04:37.519 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.519 16:51:11 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.519 16:51:11 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.519 16:51:11 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.519 16:51:11 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.519 16:51:11 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.519 16:51:11 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.519 16:51:11 rpc -- scripts/common.sh@345 -- # : 1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.519 16:51:11 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.519 16:51:11 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.519 16:51:11 rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.519 16:51:11 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.519 16:51:11 rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.519 16:51:11 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.519 16:51:11 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.519 16:51:11 rpc -- scripts/common.sh@368 -- # return 0 00:04:37.519 16:51:11 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.520 --rc genhtml_branch_coverage=1 00:04:37.520 --rc genhtml_function_coverage=1 00:04:37.520 --rc genhtml_legend=1 00:04:37.520 --rc geninfo_all_blocks=1 00:04:37.520 --rc geninfo_unexecuted_blocks=1 00:04:37.520 00:04:37.520 ' 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.520 --rc genhtml_branch_coverage=1 00:04:37.520 --rc genhtml_function_coverage=1 00:04:37.520 --rc genhtml_legend=1 00:04:37.520 --rc geninfo_all_blocks=1 00:04:37.520 --rc geninfo_unexecuted_blocks=1 00:04:37.520 00:04:37.520 ' 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.520 --rc genhtml_branch_coverage=1 00:04:37.520 --rc genhtml_function_coverage=1 00:04:37.520 --rc genhtml_legend=1 00:04:37.520 --rc geninfo_all_blocks=1 00:04:37.520 --rc geninfo_unexecuted_blocks=1 00:04:37.520 00:04:37.520 ' 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.520 --rc genhtml_branch_coverage=1 00:04:37.520 --rc genhtml_function_coverage=1 00:04:37.520 --rc genhtml_legend=1 00:04:37.520 --rc geninfo_all_blocks=1 00:04:37.520 --rc geninfo_unexecuted_blocks=1 00:04:37.520 00:04:37.520 ' 00:04:37.520 16:51:11 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57090 00:04:37.520 16:51:11 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.520 16:51:11 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57090 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@835 -- # '[' -z 57090 ']' 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.520 16:51:11 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:37.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
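The startup being waited on here reduces to a few shell steps. A sketch, with the -e bdev tracepoint flag and socket path taken from this log; the polling loop is an illustrative stand-in for the harness's waitforlisten helper:

./build/bin/spdk_tgt -e bdev &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the RPC listener socket
./scripts/rpc.py rpc_get_methods >/dev/null            # succeeds once the target accepts RPCs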
00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.520 16:51:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.520 [2024-12-05 16:51:11.839025] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:37.520 [2024-12-05 16:51:11.839143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57090 ] 00:04:37.778 [2024-12-05 16:51:11.987445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.778 [2024-12-05 16:51:12.066988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:37.778 [2024-12-05 16:51:12.067041] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57090' to capture a snapshot of events at runtime. 00:04:37.779 [2024-12-05 16:51:12.067049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:37.779 [2024-12-05 16:51:12.067056] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:37.779 [2024-12-05 16:51:12.067062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57090 for offline analysis/debug. 00:04:37.779 [2024-12-05 16:51:12.067730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.345 16:51:12 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:38.345 16:51:12 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:38.345 16:51:12 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.345 16:51:12 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:38.345 16:51:12 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:38.345 16:51:12 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:38.345 16:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.345 16:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.345 16:51:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.345 ************************************ 00:04:38.345 START TEST rpc_integrity 00:04:38.345 ************************************ 00:04:38.345 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:38.345 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:38.345 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.346 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.346 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.346 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:38.346 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.604 16:51:12 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:38.604 { 00:04:38.604 "name": "Malloc0", 00:04:38.604 "aliases": [ 00:04:38.604 "19348cdc-8924-4fbc-a185-d9c706dfbe63" 00:04:38.604 ], 00:04:38.604 "product_name": "Malloc disk", 00:04:38.604 "block_size": 512, 00:04:38.604 "num_blocks": 16384, 00:04:38.604 "uuid": "19348cdc-8924-4fbc-a185-d9c706dfbe63", 00:04:38.604 "assigned_rate_limits": { 00:04:38.604 "rw_ios_per_sec": 0, 00:04:38.604 "rw_mbytes_per_sec": 0, 00:04:38.604 "r_mbytes_per_sec": 0, 00:04:38.604 "w_mbytes_per_sec": 0 00:04:38.604 }, 00:04:38.604 "claimed": false, 00:04:38.604 "zoned": false, 00:04:38.604 "supported_io_types": { 00:04:38.604 "read": true, 00:04:38.604 "write": true, 00:04:38.604 "unmap": true, 00:04:38.604 "flush": true, 00:04:38.604 "reset": true, 00:04:38.604 "nvme_admin": false, 00:04:38.604 "nvme_io": false, 00:04:38.604 "nvme_io_md": false, 00:04:38.604 "write_zeroes": true, 00:04:38.604 "zcopy": true, 00:04:38.604 "get_zone_info": false, 00:04:38.604 "zone_management": false, 00:04:38.604 "zone_append": false, 00:04:38.604 "compare": false, 00:04:38.604 "compare_and_write": false, 00:04:38.604 "abort": true, 00:04:38.604 "seek_hole": false, 00:04:38.604 "seek_data": false, 00:04:38.604 "copy": true, 00:04:38.604 "nvme_iov_md": false 00:04:38.604 }, 00:04:38.604 "memory_domains": [ 00:04:38.604 { 00:04:38.604 "dma_device_id": "system", 00:04:38.604 "dma_device_type": 1 00:04:38.604 }, 00:04:38.604 { 00:04:38.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.604 "dma_device_type": 2 00:04:38.604 } 00:04:38.604 ], 00:04:38.604 "driver_specific": {} 00:04:38.604 } 00:04:38.604 ]' 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.604 [2024-12-05 16:51:12.797696] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:38.604 [2024-12-05 16:51:12.797751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:38.604 [2024-12-05 16:51:12.797770] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:38.604 [2024-12-05 16:51:12.797779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:38.604 [2024-12-05 16:51:12.799527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:38.604 [2024-12-05 16:51:12.799566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:38.604 Passthru0 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.604 
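The rpc_integrity flow traced here maps directly onto scripts/rpc.py calls against the running target. A sketch of the same sequence (8 MiB at a 512-byte block size, which yields the 16384 num_blocks reported below):

./scripts/rpc.py bdev_malloc_create 8 512                        # creates Malloc0
./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0    # Passthru0 claims Malloc0 exclusively
./scripts/rpc.py bdev_get_bdevs | jq length                      # expect 2 bdevs
./scripts/rpc.py bdev_passthru_delete Passthru0
./scripts/rpc.py bdev_malloc_delete Malloc0                      # back to an empty bdev list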
16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.604 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.604 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:38.604 { 00:04:38.604 "name": "Malloc0", 00:04:38.604 "aliases": [ 00:04:38.604 "19348cdc-8924-4fbc-a185-d9c706dfbe63" 00:04:38.604 ], 00:04:38.604 "product_name": "Malloc disk", 00:04:38.604 "block_size": 512, 00:04:38.604 "num_blocks": 16384, 00:04:38.604 "uuid": "19348cdc-8924-4fbc-a185-d9c706dfbe63", 00:04:38.604 "assigned_rate_limits": { 00:04:38.604 "rw_ios_per_sec": 0, 00:04:38.604 "rw_mbytes_per_sec": 0, 00:04:38.604 "r_mbytes_per_sec": 0, 00:04:38.604 "w_mbytes_per_sec": 0 00:04:38.604 }, 00:04:38.604 "claimed": true, 00:04:38.604 "claim_type": "exclusive_write", 00:04:38.604 "zoned": false, 00:04:38.604 "supported_io_types": { 00:04:38.604 "read": true, 00:04:38.604 "write": true, 00:04:38.604 "unmap": true, 00:04:38.604 "flush": true, 00:04:38.604 "reset": true, 00:04:38.604 "nvme_admin": false, 00:04:38.604 "nvme_io": false, 00:04:38.604 "nvme_io_md": false, 00:04:38.604 "write_zeroes": true, 00:04:38.604 "zcopy": true, 00:04:38.604 "get_zone_info": false, 00:04:38.604 "zone_management": false, 00:04:38.604 "zone_append": false, 00:04:38.605 "compare": false, 00:04:38.605 "compare_and_write": false, 00:04:38.605 "abort": true, 00:04:38.605 "seek_hole": false, 00:04:38.605 "seek_data": false, 00:04:38.605 "copy": true, 00:04:38.605 "nvme_iov_md": false 00:04:38.605 }, 00:04:38.605 "memory_domains": [ 00:04:38.605 { 00:04:38.605 "dma_device_id": "system", 00:04:38.605 "dma_device_type": 1 00:04:38.605 }, 00:04:38.605 { 00:04:38.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.605 "dma_device_type": 2 00:04:38.605 } 00:04:38.605 ], 00:04:38.605 "driver_specific": {} 00:04:38.605 }, 00:04:38.605 { 00:04:38.605 "name": "Passthru0", 00:04:38.605 "aliases": [ 00:04:38.605 "558fb5de-ae53-52e7-9cb4-145b3e7e437a" 00:04:38.605 ], 00:04:38.605 "product_name": "passthru", 00:04:38.605 "block_size": 512, 00:04:38.605 "num_blocks": 16384, 00:04:38.605 "uuid": "558fb5de-ae53-52e7-9cb4-145b3e7e437a", 00:04:38.605 "assigned_rate_limits": { 00:04:38.605 "rw_ios_per_sec": 0, 00:04:38.605 "rw_mbytes_per_sec": 0, 00:04:38.605 "r_mbytes_per_sec": 0, 00:04:38.605 "w_mbytes_per_sec": 0 00:04:38.605 }, 00:04:38.605 "claimed": false, 00:04:38.605 "zoned": false, 00:04:38.605 "supported_io_types": { 00:04:38.605 "read": true, 00:04:38.605 "write": true, 00:04:38.605 "unmap": true, 00:04:38.605 "flush": true, 00:04:38.605 "reset": true, 00:04:38.605 "nvme_admin": false, 00:04:38.605 "nvme_io": false, 00:04:38.605 "nvme_io_md": false, 00:04:38.605 "write_zeroes": true, 00:04:38.605 "zcopy": true, 00:04:38.605 "get_zone_info": false, 00:04:38.605 "zone_management": false, 00:04:38.605 "zone_append": false, 00:04:38.605 "compare": false, 00:04:38.605 "compare_and_write": false, 00:04:38.605 "abort": true, 00:04:38.605 "seek_hole": false, 00:04:38.605 "seek_data": false, 00:04:38.605 "copy": true, 00:04:38.605 "nvme_iov_md": false 00:04:38.605 }, 00:04:38.605 "memory_domains": [ 00:04:38.605 { 00:04:38.605 "dma_device_id": "system", 00:04:38.605 "dma_device_type": 1 00:04:38.605 }, 00:04:38.605 { 00:04:38.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.605 "dma_device_type": 2 
00:04:38.605 } 00:04:38.605 ], 00:04:38.605 "driver_specific": { 00:04:38.605 "passthru": { 00:04:38.605 "name": "Passthru0", 00:04:38.605 "base_bdev_name": "Malloc0" 00:04:38.605 } 00:04:38.605 } 00:04:38.605 } 00:04:38.605 ]' 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:38.605 16:51:12 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:38.605 00:04:38.605 real 0m0.243s 00:04:38.605 user 0m0.140s 00:04:38.605 sys 0m0.028s 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.605 16:51:12 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:38.605 ************************************ 00:04:38.605 END TEST rpc_integrity 00:04:38.605 ************************************ 00:04:38.605 16:51:12 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:38.605 16:51:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.605 16:51:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.605 16:51:12 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.605 ************************************ 00:04:38.605 START TEST rpc_plugins 00:04:38.605 ************************************ 00:04:38.605 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:38.605 16:51:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:38.605 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.605 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.865 16:51:12 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:38.865 16:51:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:38.865 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.865 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:51:12 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.865 16:51:12 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:38.865 { 00:04:38.865 "name": "Malloc1", 00:04:38.865 "aliases": 
[ 00:04:38.865 "f87471c0-ea1d-4b1b-a202-2933eb764d1b" 00:04:38.865 ], 00:04:38.865 "product_name": "Malloc disk", 00:04:38.865 "block_size": 4096, 00:04:38.865 "num_blocks": 256, 00:04:38.865 "uuid": "f87471c0-ea1d-4b1b-a202-2933eb764d1b", 00:04:38.865 "assigned_rate_limits": { 00:04:38.865 "rw_ios_per_sec": 0, 00:04:38.865 "rw_mbytes_per_sec": 0, 00:04:38.865 "r_mbytes_per_sec": 0, 00:04:38.865 "w_mbytes_per_sec": 0 00:04:38.865 }, 00:04:38.865 "claimed": false, 00:04:38.865 "zoned": false, 00:04:38.865 "supported_io_types": { 00:04:38.865 "read": true, 00:04:38.865 "write": true, 00:04:38.865 "unmap": true, 00:04:38.865 "flush": true, 00:04:38.865 "reset": true, 00:04:38.865 "nvme_admin": false, 00:04:38.865 "nvme_io": false, 00:04:38.865 "nvme_io_md": false, 00:04:38.865 "write_zeroes": true, 00:04:38.865 "zcopy": true, 00:04:38.865 "get_zone_info": false, 00:04:38.865 "zone_management": false, 00:04:38.865 "zone_append": false, 00:04:38.865 "compare": false, 00:04:38.865 "compare_and_write": false, 00:04:38.865 "abort": true, 00:04:38.865 "seek_hole": false, 00:04:38.865 "seek_data": false, 00:04:38.865 "copy": true, 00:04:38.865 "nvme_iov_md": false 00:04:38.865 }, 00:04:38.865 "memory_domains": [ 00:04:38.865 { 00:04:38.865 "dma_device_id": "system", 00:04:38.865 "dma_device_type": 1 00:04:38.865 }, 00:04:38.865 { 00:04:38.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:38.865 "dma_device_type": 2 00:04:38.865 } 00:04:38.865 ], 00:04:38.865 "driver_specific": {} 00:04:38.865 } 00:04:38.865 ]' 00:04:38.865 16:51:12 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:38.865 16:51:13 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:38.865 00:04:38.865 real 0m0.111s 00:04:38.865 user 0m0.069s 00:04:38.865 sys 0m0.013s 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.865 16:51:13 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 ************************************ 00:04:38.865 END TEST rpc_plugins 00:04:38.865 ************************************ 00:04:38.865 16:51:13 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:38.865 16:51:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:38.865 16:51:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.865 16:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 ************************************ 00:04:38.865 START TEST rpc_trace_cmd_test 00:04:38.865 ************************************ 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:38.865 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57090", 00:04:38.865 "tpoint_group_mask": "0x8", 00:04:38.865 "iscsi_conn": { 00:04:38.865 "mask": "0x2", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "scsi": { 00:04:38.865 "mask": "0x4", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "bdev": { 00:04:38.865 "mask": "0x8", 00:04:38.865 "tpoint_mask": "0xffffffffffffffff" 00:04:38.865 }, 00:04:38.865 "nvmf_rdma": { 00:04:38.865 "mask": "0x10", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "nvmf_tcp": { 00:04:38.865 "mask": "0x20", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "ftl": { 00:04:38.865 "mask": "0x40", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "blobfs": { 00:04:38.865 "mask": "0x80", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "dsa": { 00:04:38.865 "mask": "0x200", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "thread": { 00:04:38.865 "mask": "0x400", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "nvme_pcie": { 00:04:38.865 "mask": "0x800", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "iaa": { 00:04:38.865 "mask": "0x1000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "nvme_tcp": { 00:04:38.865 "mask": "0x2000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "bdev_nvme": { 00:04:38.865 "mask": "0x4000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "sock": { 00:04:38.865 "mask": "0x8000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "blob": { 00:04:38.865 "mask": "0x10000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "bdev_raid": { 00:04:38.865 "mask": "0x20000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 }, 00:04:38.865 "scheduler": { 00:04:38.865 "mask": "0x40000", 00:04:38.865 "tpoint_mask": "0x0" 00:04:38.865 } 00:04:38.865 }' 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:38.865 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.124 00:04:39.124 real 0m0.188s 00:04:39.124 user 0m0.159s 00:04:39.124 sys 0m0.022s 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:04:39.124 16:51:13 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.124 ************************************ 00:04:39.124 END TEST rpc_trace_cmd_test 00:04:39.124 ************************************ 00:04:39.124 16:51:13 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.124 16:51:13 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.124 16:51:13 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.124 16:51:13 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.124 16:51:13 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.124 16:51:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.124 ************************************ 00:04:39.124 START TEST rpc_daemon_integrity 00:04:39.124 ************************************ 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.124 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.124 { 00:04:39.124 "name": "Malloc2", 00:04:39.124 "aliases": [ 00:04:39.124 "5333141c-3447-48a2-a138-05c99f158f19" 00:04:39.124 ], 00:04:39.124 "product_name": "Malloc disk", 00:04:39.124 "block_size": 512, 00:04:39.124 "num_blocks": 16384, 00:04:39.124 "uuid": "5333141c-3447-48a2-a138-05c99f158f19", 00:04:39.124 "assigned_rate_limits": { 00:04:39.124 "rw_ios_per_sec": 0, 00:04:39.124 "rw_mbytes_per_sec": 0, 00:04:39.124 "r_mbytes_per_sec": 0, 00:04:39.124 "w_mbytes_per_sec": 0 00:04:39.124 }, 00:04:39.124 "claimed": false, 00:04:39.124 "zoned": false, 00:04:39.124 "supported_io_types": { 00:04:39.125 "read": true, 00:04:39.125 "write": true, 00:04:39.125 "unmap": true, 00:04:39.125 "flush": true, 00:04:39.125 "reset": true, 00:04:39.125 "nvme_admin": false, 00:04:39.125 "nvme_io": false, 00:04:39.125 "nvme_io_md": false, 00:04:39.125 "write_zeroes": true, 00:04:39.125 "zcopy": true, 00:04:39.125 "get_zone_info": false, 00:04:39.125 "zone_management": false, 00:04:39.125 "zone_append": false, 00:04:39.125 "compare": false, 00:04:39.125 
"compare_and_write": false, 00:04:39.125 "abort": true, 00:04:39.125 "seek_hole": false, 00:04:39.125 "seek_data": false, 00:04:39.125 "copy": true, 00:04:39.125 "nvme_iov_md": false 00:04:39.125 }, 00:04:39.125 "memory_domains": [ 00:04:39.125 { 00:04:39.125 "dma_device_id": "system", 00:04:39.125 "dma_device_type": 1 00:04:39.125 }, 00:04:39.125 { 00:04:39.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.125 "dma_device_type": 2 00:04:39.125 } 00:04:39.125 ], 00:04:39.125 "driver_specific": {} 00:04:39.125 } 00:04:39.125 ]' 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.125 [2024-12-05 16:51:13.450281] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:39.125 [2024-12-05 16:51:13.450332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.125 [2024-12-05 16:51:13.450349] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:39.125 [2024-12-05 16:51:13.450358] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.125 [2024-12-05 16:51:13.452098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.125 [2024-12-05 16:51:13.452132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.125 Passthru0 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.125 { 00:04:39.125 "name": "Malloc2", 00:04:39.125 "aliases": [ 00:04:39.125 "5333141c-3447-48a2-a138-05c99f158f19" 00:04:39.125 ], 00:04:39.125 "product_name": "Malloc disk", 00:04:39.125 "block_size": 512, 00:04:39.125 "num_blocks": 16384, 00:04:39.125 "uuid": "5333141c-3447-48a2-a138-05c99f158f19", 00:04:39.125 "assigned_rate_limits": { 00:04:39.125 "rw_ios_per_sec": 0, 00:04:39.125 "rw_mbytes_per_sec": 0, 00:04:39.125 "r_mbytes_per_sec": 0, 00:04:39.125 "w_mbytes_per_sec": 0 00:04:39.125 }, 00:04:39.125 "claimed": true, 00:04:39.125 "claim_type": "exclusive_write", 00:04:39.125 "zoned": false, 00:04:39.125 "supported_io_types": { 00:04:39.125 "read": true, 00:04:39.125 "write": true, 00:04:39.125 "unmap": true, 00:04:39.125 "flush": true, 00:04:39.125 "reset": true, 00:04:39.125 "nvme_admin": false, 00:04:39.125 "nvme_io": false, 00:04:39.125 "nvme_io_md": false, 00:04:39.125 "write_zeroes": true, 00:04:39.125 "zcopy": true, 00:04:39.125 "get_zone_info": false, 00:04:39.125 "zone_management": false, 00:04:39.125 "zone_append": false, 00:04:39.125 "compare": false, 00:04:39.125 "compare_and_write": false, 00:04:39.125 "abort": true, 00:04:39.125 "seek_hole": false, 00:04:39.125 "seek_data": false, 
00:04:39.125 "copy": true, 00:04:39.125 "nvme_iov_md": false 00:04:39.125 }, 00:04:39.125 "memory_domains": [ 00:04:39.125 { 00:04:39.125 "dma_device_id": "system", 00:04:39.125 "dma_device_type": 1 00:04:39.125 }, 00:04:39.125 { 00:04:39.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.125 "dma_device_type": 2 00:04:39.125 } 00:04:39.125 ], 00:04:39.125 "driver_specific": {} 00:04:39.125 }, 00:04:39.125 { 00:04:39.125 "name": "Passthru0", 00:04:39.125 "aliases": [ 00:04:39.125 "a1d9bbe1-7320-591d-81ab-e824f118059a" 00:04:39.125 ], 00:04:39.125 "product_name": "passthru", 00:04:39.125 "block_size": 512, 00:04:39.125 "num_blocks": 16384, 00:04:39.125 "uuid": "a1d9bbe1-7320-591d-81ab-e824f118059a", 00:04:39.125 "assigned_rate_limits": { 00:04:39.125 "rw_ios_per_sec": 0, 00:04:39.125 "rw_mbytes_per_sec": 0, 00:04:39.125 "r_mbytes_per_sec": 0, 00:04:39.125 "w_mbytes_per_sec": 0 00:04:39.125 }, 00:04:39.125 "claimed": false, 00:04:39.125 "zoned": false, 00:04:39.125 "supported_io_types": { 00:04:39.125 "read": true, 00:04:39.125 "write": true, 00:04:39.125 "unmap": true, 00:04:39.125 "flush": true, 00:04:39.125 "reset": true, 00:04:39.125 "nvme_admin": false, 00:04:39.125 "nvme_io": false, 00:04:39.125 "nvme_io_md": false, 00:04:39.125 "write_zeroes": true, 00:04:39.125 "zcopy": true, 00:04:39.125 "get_zone_info": false, 00:04:39.125 "zone_management": false, 00:04:39.125 "zone_append": false, 00:04:39.125 "compare": false, 00:04:39.125 "compare_and_write": false, 00:04:39.125 "abort": true, 00:04:39.125 "seek_hole": false, 00:04:39.125 "seek_data": false, 00:04:39.125 "copy": true, 00:04:39.125 "nvme_iov_md": false 00:04:39.125 }, 00:04:39.125 "memory_domains": [ 00:04:39.125 { 00:04:39.125 "dma_device_id": "system", 00:04:39.125 "dma_device_type": 1 00:04:39.125 }, 00:04:39.125 { 00:04:39.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.125 "dma_device_type": 2 00:04:39.125 } 00:04:39.125 ], 00:04:39.125 "driver_specific": { 00:04:39.125 "passthru": { 00:04:39.125 "name": "Passthru0", 00:04:39.125 "base_bdev_name": "Malloc2" 00:04:39.125 } 00:04:39.125 } 00:04:39.125 } 00:04:39.125 ]' 00:04:39.125 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.416 00:04:39.416 real 0m0.244s 00:04:39.416 user 0m0.132s 00:04:39.416 sys 0m0.033s 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.416 ************************************ 00:04:39.416 END TEST rpc_daemon_integrity 00:04:39.416 ************************************ 00:04:39.416 16:51:13 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.416 16:51:13 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:39.416 16:51:13 rpc -- rpc/rpc.sh@84 -- # killprocess 57090 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@954 -- # '[' -z 57090 ']' 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@958 -- # kill -0 57090 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@959 -- # uname 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57090 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:39.416 killing process with pid 57090 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57090' 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@973 -- # kill 57090 00:04:39.416 16:51:13 rpc -- common/autotest_common.sh@978 -- # wait 57090 00:04:40.791 00:04:40.791 real 0m3.193s 00:04:40.791 user 0m3.652s 00:04:40.791 sys 0m0.593s 00:04:40.791 16:51:14 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.791 16:51:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 ************************************ 00:04:40.791 END TEST rpc 00:04:40.791 ************************************ 00:04:40.791 16:51:14 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.791 16:51:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.791 16:51:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.791 16:51:14 -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 ************************************ 00:04:40.791 START TEST skip_rpc 00:04:40.791 ************************************ 00:04:40.791 16:51:14 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:40.791 * Looking for test storage... 
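Every suite here exits through the same killprocess helper traced above: verify the pid is still alive with kill -0, check by name that it is the target's reactor and not a sudo wrapper, then kill and reap it. A simplified sketch of that helper, assuming a Linux host (the trace's uname branch):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                        # no pid given
        kill -0 "$pid" 2>/dev/null || return 0           # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")          # e.g. reactor_0
        [ "$name" = sudo ] && return 1                   # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                              # reap; a nonzero exit is expected
    }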
00:04:40.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:40.791 16:51:14 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.791 16:51:14 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.791 16:51:14 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.791 16:51:14 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.791 16:51:14 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.791 16:51:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.791 --rc genhtml_branch_coverage=1 00:04:40.791 --rc genhtml_function_coverage=1 00:04:40.791 --rc genhtml_legend=1 00:04:40.791 --rc geninfo_all_blocks=1 00:04:40.791 --rc geninfo_unexecuted_blocks=1 00:04:40.791 00:04:40.791 ' 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.791 --rc genhtml_branch_coverage=1 00:04:40.791 --rc genhtml_function_coverage=1 00:04:40.791 --rc genhtml_legend=1 00:04:40.791 --rc geninfo_all_blocks=1 00:04:40.791 --rc geninfo_unexecuted_blocks=1 00:04:40.791 00:04:40.791 ' 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:40.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.791 --rc genhtml_branch_coverage=1 00:04:40.791 --rc genhtml_function_coverage=1 00:04:40.791 --rc genhtml_legend=1 00:04:40.791 --rc geninfo_all_blocks=1 00:04:40.791 --rc geninfo_unexecuted_blocks=1 00:04:40.791 00:04:40.791 ' 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.791 --rc genhtml_branch_coverage=1 00:04:40.791 --rc genhtml_function_coverage=1 00:04:40.791 --rc genhtml_legend=1 00:04:40.791 --rc geninfo_all_blocks=1 00:04:40.791 --rc geninfo_unexecuted_blocks=1 00:04:40.791 00:04:40.791 ' 00:04:40.791 16:51:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.791 16:51:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:40.791 16:51:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.791 16:51:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.791 ************************************ 00:04:40.791 START TEST skip_rpc 00:04:40.791 ************************************ 00:04:40.791 16:51:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:40.791 16:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57297 00:04:40.791 16:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.791 16:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:40.791 16:51:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:40.791 [2024-12-05 16:51:15.097723] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
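The lt 1.15 2 gate traced before each suite splits both version strings on '.', '-' and ':' and compares them numerically component by component, padding the shorter one with zeros. A condensed sketch of scripts/common.sh's lt/cmp_versions pair as the xtrace shows it, with the separate decimal helper folded into ${...:-0} defaults:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v a b
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            if (( a > b )); then [[ $op == '>' || $op == '>=' ]]; return; fi
            if (( a < b )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all components equal
    }

For lcov 1.15 against 2, the first components already decide it (1 < 2), so lt returns 0 and the old-lcov --rc options above get exported.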
00:04:40.791 [2024-12-05 16:51:15.097841] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57297 ] 00:04:41.049 [2024-12-05 16:51:15.254346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.049 [2024-12-05 16:51:15.334891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57297 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57297 ']' 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57297 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57297 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57297' 00:04:46.334 killing process with pid 57297 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57297 00:04:46.334 16:51:20 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57297 00:04:46.900 ************************************ 00:04:46.900 END TEST skip_rpc 00:04:46.900 ************************************ 00:04:46.900 00:04:46.900 real 0m6.190s 00:04:46.900 user 0m5.822s 00:04:46.900 sys 0m0.259s 00:04:46.900 16:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.900 16:51:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:46.900 16:51:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:46.900 16:51:21 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.900 16:51:21 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.900 16:51:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.900 ************************************ 00:04:46.900 START TEST skip_rpc_with_json 00:04:46.900 ************************************ 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57395 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57395 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57395 ']' 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:46.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:46.900 16:51:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:47.158 [2024-12-05 16:51:21.328934] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
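The with_json suite that starts here first proves the configuration is empty: nvmf_get_transports must fail with "transport 'tcp' does not exist" before nvmf_create_transport -t tcp runs, and succeed afterwards. A minimal sketch of that probe against a running target:

    if ./scripts/rpc.py nvmf_get_transports --trtype tcp; then
        echo 'FAIL: no TCP transport should exist yet' >&2
        exit 1
    fi
    ./scripts/rpc.py nvmf_create_transport -t tcp      # logs '*** TCP Transport Init ***'
    ./scripts/rpc.py nvmf_get_transports --trtype tcp  # now reports the transport

The transport matters because the suite later saves the whole configuration with save_config, restarts the target from that JSON, and greps the new instance's log for 'TCP Transport Init' to prove the RPC was replayed.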
00:04:47.158 [2024-12-05 16:51:21.329066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57395 ] 00:04:47.158 [2024-12-05 16:51:21.484004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.416 [2024-12-05 16:51:21.560218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.983 [2024-12-05 16:51:22.154222] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:47.983 request: 00:04:47.983 { 00:04:47.983 "trtype": "tcp", 00:04:47.983 "method": "nvmf_get_transports", 00:04:47.983 "req_id": 1 00:04:47.983 } 00:04:47.983 Got JSON-RPC error response 00:04:47.983 response: 00:04:47.983 { 00:04:47.983 "code": -19, 00:04:47.983 "message": "No such device" 00:04:47.983 } 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.983 [2024-12-05 16:51:22.166313] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:47.983 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:47.983 { 00:04:47.983 "subsystems": [ 00:04:47.983 { 00:04:47.983 "subsystem": "fsdev", 00:04:47.983 "config": [ 00:04:47.983 { 00:04:47.983 "method": "fsdev_set_opts", 00:04:47.983 "params": { 00:04:47.983 "fsdev_io_pool_size": 65535, 00:04:47.983 "fsdev_io_cache_size": 256 00:04:47.983 } 00:04:47.983 } 00:04:47.983 ] 00:04:47.983 }, 00:04:47.983 { 00:04:47.983 "subsystem": "keyring", 00:04:47.983 "config": [] 00:04:47.983 }, 00:04:47.983 { 00:04:47.983 "subsystem": "iobuf", 00:04:47.983 "config": [ 00:04:47.983 { 00:04:47.983 "method": "iobuf_set_options", 00:04:47.983 "params": { 00:04:47.983 "small_pool_count": 8192, 00:04:47.983 "large_pool_count": 1024, 00:04:47.983 "small_bufsize": 8192, 00:04:47.983 "large_bufsize": 135168, 00:04:47.983 "enable_numa": false 00:04:47.983 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "sock", 00:04:47.984 "config": [ 00:04:47.984 { 
00:04:47.984 "method": "sock_set_default_impl", 00:04:47.984 "params": { 00:04:47.984 "impl_name": "posix" 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "sock_impl_set_options", 00:04:47.984 "params": { 00:04:47.984 "impl_name": "ssl", 00:04:47.984 "recv_buf_size": 4096, 00:04:47.984 "send_buf_size": 4096, 00:04:47.984 "enable_recv_pipe": true, 00:04:47.984 "enable_quickack": false, 00:04:47.984 "enable_placement_id": 0, 00:04:47.984 "enable_zerocopy_send_server": true, 00:04:47.984 "enable_zerocopy_send_client": false, 00:04:47.984 "zerocopy_threshold": 0, 00:04:47.984 "tls_version": 0, 00:04:47.984 "enable_ktls": false 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "sock_impl_set_options", 00:04:47.984 "params": { 00:04:47.984 "impl_name": "posix", 00:04:47.984 "recv_buf_size": 2097152, 00:04:47.984 "send_buf_size": 2097152, 00:04:47.984 "enable_recv_pipe": true, 00:04:47.984 "enable_quickack": false, 00:04:47.984 "enable_placement_id": 0, 00:04:47.984 "enable_zerocopy_send_server": true, 00:04:47.984 "enable_zerocopy_send_client": false, 00:04:47.984 "zerocopy_threshold": 0, 00:04:47.984 "tls_version": 0, 00:04:47.984 "enable_ktls": false 00:04:47.984 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "vmd", 00:04:47.984 "config": [] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "accel", 00:04:47.984 "config": [ 00:04:47.984 { 00:04:47.984 "method": "accel_set_options", 00:04:47.984 "params": { 00:04:47.984 "small_cache_size": 128, 00:04:47.984 "large_cache_size": 16, 00:04:47.984 "task_count": 2048, 00:04:47.984 "sequence_count": 2048, 00:04:47.984 "buf_count": 2048 00:04:47.984 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "bdev", 00:04:47.984 "config": [ 00:04:47.984 { 00:04:47.984 "method": "bdev_set_options", 00:04:47.984 "params": { 00:04:47.984 "bdev_io_pool_size": 65535, 00:04:47.984 "bdev_io_cache_size": 256, 00:04:47.984 "bdev_auto_examine": true, 00:04:47.984 "iobuf_small_cache_size": 128, 00:04:47.984 "iobuf_large_cache_size": 16 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "bdev_raid_set_options", 00:04:47.984 "params": { 00:04:47.984 "process_window_size_kb": 1024, 00:04:47.984 "process_max_bandwidth_mb_sec": 0 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "bdev_iscsi_set_options", 00:04:47.984 "params": { 00:04:47.984 "timeout_sec": 30 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "bdev_nvme_set_options", 00:04:47.984 "params": { 00:04:47.984 "action_on_timeout": "none", 00:04:47.984 "timeout_us": 0, 00:04:47.984 "timeout_admin_us": 0, 00:04:47.984 "keep_alive_timeout_ms": 10000, 00:04:47.984 "arbitration_burst": 0, 00:04:47.984 "low_priority_weight": 0, 00:04:47.984 "medium_priority_weight": 0, 00:04:47.984 "high_priority_weight": 0, 00:04:47.984 "nvme_adminq_poll_period_us": 10000, 00:04:47.984 "nvme_ioq_poll_period_us": 0, 00:04:47.984 "io_queue_requests": 0, 00:04:47.984 "delay_cmd_submit": true, 00:04:47.984 "transport_retry_count": 4, 00:04:47.984 "bdev_retry_count": 3, 00:04:47.984 "transport_ack_timeout": 0, 00:04:47.984 "ctrlr_loss_timeout_sec": 0, 00:04:47.984 "reconnect_delay_sec": 0, 00:04:47.984 "fast_io_fail_timeout_sec": 0, 00:04:47.984 "disable_auto_failback": false, 00:04:47.984 "generate_uuids": false, 00:04:47.984 "transport_tos": 0, 00:04:47.984 "nvme_error_stat": false, 00:04:47.984 "rdma_srq_size": 0, 00:04:47.984 "io_path_stat": false, 
00:04:47.984 "allow_accel_sequence": false, 00:04:47.984 "rdma_max_cq_size": 0, 00:04:47.984 "rdma_cm_event_timeout_ms": 0, 00:04:47.984 "dhchap_digests": [ 00:04:47.984 "sha256", 00:04:47.984 "sha384", 00:04:47.984 "sha512" 00:04:47.984 ], 00:04:47.984 "dhchap_dhgroups": [ 00:04:47.984 "null", 00:04:47.984 "ffdhe2048", 00:04:47.984 "ffdhe3072", 00:04:47.984 "ffdhe4096", 00:04:47.984 "ffdhe6144", 00:04:47.984 "ffdhe8192" 00:04:47.984 ] 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "bdev_nvme_set_hotplug", 00:04:47.984 "params": { 00:04:47.984 "period_us": 100000, 00:04:47.984 "enable": false 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "bdev_wait_for_examine" 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "scsi", 00:04:47.984 "config": null 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "scheduler", 00:04:47.984 "config": [ 00:04:47.984 { 00:04:47.984 "method": "framework_set_scheduler", 00:04:47.984 "params": { 00:04:47.984 "name": "static" 00:04:47.984 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "vhost_scsi", 00:04:47.984 "config": [] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "vhost_blk", 00:04:47.984 "config": [] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "ublk", 00:04:47.984 "config": [] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "nbd", 00:04:47.984 "config": [] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "nvmf", 00:04:47.984 "config": [ 00:04:47.984 { 00:04:47.984 "method": "nvmf_set_config", 00:04:47.984 "params": { 00:04:47.984 "discovery_filter": "match_any", 00:04:47.984 "admin_cmd_passthru": { 00:04:47.984 "identify_ctrlr": false 00:04:47.984 }, 00:04:47.984 "dhchap_digests": [ 00:04:47.984 "sha256", 00:04:47.984 "sha384", 00:04:47.984 "sha512" 00:04:47.984 ], 00:04:47.984 "dhchap_dhgroups": [ 00:04:47.984 "null", 00:04:47.984 "ffdhe2048", 00:04:47.984 "ffdhe3072", 00:04:47.984 "ffdhe4096", 00:04:47.984 "ffdhe6144", 00:04:47.984 "ffdhe8192" 00:04:47.984 ] 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "nvmf_set_max_subsystems", 00:04:47.984 "params": { 00:04:47.984 "max_subsystems": 1024 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "nvmf_set_crdt", 00:04:47.984 "params": { 00:04:47.984 "crdt1": 0, 00:04:47.984 "crdt2": 0, 00:04:47.984 "crdt3": 0 00:04:47.984 } 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "method": "nvmf_create_transport", 00:04:47.984 "params": { 00:04:47.984 "trtype": "TCP", 00:04:47.984 "max_queue_depth": 128, 00:04:47.984 "max_io_qpairs_per_ctrlr": 127, 00:04:47.984 "in_capsule_data_size": 4096, 00:04:47.984 "max_io_size": 131072, 00:04:47.984 "io_unit_size": 131072, 00:04:47.984 "max_aq_depth": 128, 00:04:47.984 "num_shared_buffers": 511, 00:04:47.984 "buf_cache_size": 4294967295, 00:04:47.984 "dif_insert_or_strip": false, 00:04:47.984 "zcopy": false, 00:04:47.984 "c2h_success": true, 00:04:47.984 "sock_priority": 0, 00:04:47.984 "abort_timeout_sec": 1, 00:04:47.984 "ack_timeout": 0, 00:04:47.984 "data_wr_pool_size": 0 00:04:47.984 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 }, 00:04:47.984 { 00:04:47.984 "subsystem": "iscsi", 00:04:47.984 "config": [ 00:04:47.984 { 00:04:47.984 "method": "iscsi_set_options", 00:04:47.984 "params": { 00:04:47.984 "node_base": "iqn.2016-06.io.spdk", 00:04:47.984 "max_sessions": 128, 00:04:47.984 "max_connections_per_session": 2, 00:04:47.984 "max_queue_depth": 64, 00:04:47.984 
"default_time2wait": 2, 00:04:47.984 "default_time2retain": 20, 00:04:47.984 "first_burst_length": 8192, 00:04:47.984 "immediate_data": true, 00:04:47.984 "allow_duplicated_isid": false, 00:04:47.984 "error_recovery_level": 0, 00:04:47.984 "nop_timeout": 60, 00:04:47.984 "nop_in_interval": 30, 00:04:47.984 "disable_chap": false, 00:04:47.984 "require_chap": false, 00:04:47.984 "mutual_chap": false, 00:04:47.984 "chap_group": 0, 00:04:47.984 "max_large_datain_per_connection": 64, 00:04:47.984 "max_r2t_per_connection": 4, 00:04:47.984 "pdu_pool_size": 36864, 00:04:47.984 "immediate_data_pool_size": 16384, 00:04:47.984 "data_out_pool_size": 2048 00:04:47.984 } 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 } 00:04:47.984 ] 00:04:47.984 } 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57395 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57395 ']' 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57395 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.984 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57395 00:04:48.242 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.242 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.242 killing process with pid 57395 00:04:48.242 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57395' 00:04:48.242 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57395 00:04:48.242 16:51:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57395 00:04:49.181 16:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57429 00:04:49.181 16:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:49.181 16:51:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57429 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57429 ']' 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57429 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57429 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.469 killing process with pid 57429 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57429' 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57429 00:04:54.469 16:51:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57429 00:04:55.401 16:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:55.401 16:51:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:55.401 00:04:55.401 real 0m8.479s 00:04:55.401 user 0m8.112s 00:04:55.401 sys 0m0.575s 00:04:55.401 16:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.401 16:51:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.401 ************************************ 00:04:55.401 END TEST skip_rpc_with_json 00:04:55.401 ************************************ 00:04:55.401 16:51:29 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.401 16:51:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.401 16:51:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.401 16:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.659 ************************************ 00:04:55.659 START TEST skip_rpc_with_delay 00:04:55.659 ************************************ 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.659 [2024-12-05 16:51:29.843809] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
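That spdk_app_start error is the entire with_delay assertion: --wait-for-rpc is meaningless when --no-rpc-server suppresses the RPC server, so the target must refuse to boot, and the suite (via its NOT wrapper) counts the nonzero exit as a pass. The same check standalone:

    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'FAIL: contradictory flags were accepted' >&2
        exit 1
    fi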
00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.659 00:04:55.659 real 0m0.124s 00:04:55.659 user 0m0.062s 00:04:55.659 sys 0m0.062s 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.659 16:51:29 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.659 ************************************ 00:04:55.659 END TEST skip_rpc_with_delay 00:04:55.659 ************************************ 00:04:55.659 16:51:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.659 16:51:29 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.659 16:51:29 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.659 16:51:29 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.659 16:51:29 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.659 16:51:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.659 ************************************ 00:04:55.659 START TEST exit_on_failed_rpc_init 00:04:55.659 ************************************ 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57546 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57546 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57546 ']' 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.660 16:51:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.660 [2024-12-05 16:51:30.009442] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
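waitforlisten, armed for pid 57395 earlier and for 57546 here, just polls until the freshly launched target answers on its RPC socket. A rough sketch of the idea, assuming scripts/rpc.py and the default socket path (the real helper in autotest_common.sh carries more bookkeeping):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1            # target died early
            ./scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
                &>/dev/null && return 0                       # RPC is answering
            sleep 0.1
        done
        return 1                                              # timed out
    }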
00:04:55.660 [2024-12-05 16:51:30.009559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57546 ] 00:04:55.980 [2024-12-05 16:51:30.168562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.980 [2024-12-05 16:51:30.264637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.566 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.567 16:51:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.567 [2024-12-05 16:51:30.922850] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:56.567 [2024-12-05 16:51:30.922979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57564 ] 00:04:56.827 [2024-12-05 16:51:31.075925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.827 [2024-12-05 16:51:31.171349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.827 [2024-12-05 16:51:31.171429] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
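That _spdk_rpc_listen error is the point of exit_on_failed_rpc_init: a second target aimed at the same default socket must fail its RPC init and exit, and the suite asserts the failure rather than the success. As a standalone sketch (sleep stands in for waitforlisten):

    ./build/bin/spdk_tgt -m 0x1 &                 # first instance owns /var/tmp/spdk.sock
    first=$!
    sleep 1
    if ./build/bin/spdk_tgt -m 0x2; then          # same socket: 'in use. Specify another.'
        echo 'FAIL: second instance should not have come up' >&2
        exit 1
    fi
    kill "$first"; wait "$first" || true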
00:04:56.827 [2024-12-05 16:51:31.171442] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.827 [2024-12-05 16:51:31.171456] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57546 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57546 ']' 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57546 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57546 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57546' 00:04:57.085 killing process with pid 57546 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57546 00:04:57.085 16:51:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57546 00:04:58.460 00:04:58.460 real 0m2.652s 00:04:58.460 user 0m2.946s 00:04:58.460 sys 0m0.413s 00:04:58.460 16:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.460 16:51:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:58.460 ************************************ 00:04:58.460 END TEST exit_on_failed_rpc_init 00:04:58.460 ************************************ 00:04:58.460 16:51:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.460 ************************************ 00:04:58.460 END TEST skip_rpc 00:04:58.460 ************************************ 00:04:58.460 00:04:58.460 real 0m17.747s 00:04:58.460 user 0m17.072s 00:04:58.460 sys 0m1.472s 00:04:58.460 16:51:32 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.461 16:51:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.461 16:51:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.461 16:51:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.461 16:51:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.461 16:51:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.461 
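The es=234 to 106 to 1 normalization traced just before killprocess 57546 above is the NOT wrapper digesting a signal exit before inverting it. Simplified (the real helper's case statement maps more codes):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # strip the 128+signal offset (234 -> 106)
        (( es != 0 )) && es=1                  # collapse any failure to 1
        (( !es == 0 ))                         # invert: succeed only when "$@" failed
    }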
************************************ 00:04:58.461 START TEST rpc_client 00:04:58.461 ************************************ 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:58.461 * Looking for test storage... 00:04:58.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.461 16:51:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.461 --rc genhtml_branch_coverage=1 00:04:58.461 --rc genhtml_function_coverage=1 00:04:58.461 --rc genhtml_legend=1 00:04:58.461 --rc geninfo_all_blocks=1 00:04:58.461 --rc geninfo_unexecuted_blocks=1 00:04:58.461 00:04:58.461 ' 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.461 --rc genhtml_branch_coverage=1 00:04:58.461 --rc genhtml_function_coverage=1 00:04:58.461 --rc genhtml_legend=1 00:04:58.461 --rc geninfo_all_blocks=1 00:04:58.461 --rc geninfo_unexecuted_blocks=1 00:04:58.461 00:04:58.461 ' 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.461 --rc genhtml_branch_coverage=1 00:04:58.461 --rc genhtml_function_coverage=1 00:04:58.461 --rc genhtml_legend=1 00:04:58.461 --rc geninfo_all_blocks=1 00:04:58.461 --rc geninfo_unexecuted_blocks=1 00:04:58.461 00:04:58.461 ' 00:04:58.461 16:51:32 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.461 --rc genhtml_branch_coverage=1 00:04:58.461 --rc genhtml_function_coverage=1 00:04:58.461 --rc genhtml_legend=1 00:04:58.461 --rc geninfo_all_blocks=1 00:04:58.461 --rc geninfo_unexecuted_blocks=1 00:04:58.461 00:04:58.461 ' 00:04:58.461 16:51:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:58.461 OK 00:04:58.722 16:51:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:58.722 00:04:58.722 real 0m0.188s 00:04:58.722 user 0m0.104s 00:04:58.722 sys 0m0.094s 00:04:58.722 16:51:32 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.722 16:51:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:58.722 ************************************ 00:04:58.722 END TEST rpc_client 00:04:58.722 ************************************ 00:04:58.722 16:51:32 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.722 16:51:32 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.722 16:51:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.722 16:51:32 -- common/autotest_common.sh@10 -- # set +x 00:04:58.722 ************************************ 00:04:58.722 START TEST json_config 00:04:58.722 ************************************ 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.722 16:51:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.722 16:51:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.722 16:51:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.722 16:51:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.722 16:51:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.722 16:51:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:58.722 16:51:32 json_config -- scripts/common.sh@345 -- # : 1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.722 16:51:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.722 16:51:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@353 -- # local d=1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.722 16:51:32 json_config -- scripts/common.sh@355 -- # echo 1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.722 16:51:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@353 -- # local d=2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.722 16:51:32 json_config -- scripts/common.sh@355 -- # echo 2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.722 16:51:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.722 16:51:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.722 16:51:32 json_config -- scripts/common.sh@368 -- # return 0 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.722 --rc genhtml_branch_coverage=1 00:04:58.722 --rc genhtml_function_coverage=1 00:04:58.722 --rc genhtml_legend=1 00:04:58.722 --rc geninfo_all_blocks=1 00:04:58.722 --rc geninfo_unexecuted_blocks=1 00:04:58.722 00:04:58.722 ' 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.722 --rc genhtml_branch_coverage=1 00:04:58.722 --rc genhtml_function_coverage=1 00:04:58.722 --rc genhtml_legend=1 00:04:58.722 --rc geninfo_all_blocks=1 00:04:58.722 --rc geninfo_unexecuted_blocks=1 00:04:58.722 00:04:58.722 ' 00:04:58.722 16:51:32 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.722 --rc genhtml_branch_coverage=1 00:04:58.722 --rc genhtml_function_coverage=1 00:04:58.722 --rc genhtml_legend=1 00:04:58.722 --rc geninfo_all_blocks=1 00:04:58.722 --rc geninfo_unexecuted_blocks=1 00:04:58.722 00:04:58.722 ' 00:04:58.722 16:51:33 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.722 --rc genhtml_branch_coverage=1 00:04:58.722 --rc genhtml_function_coverage=1 00:04:58.722 --rc genhtml_legend=1 00:04:58.723 --rc geninfo_all_blocks=1 00:04:58.723 --rc geninfo_unexecuted_blocks=1 00:04:58.723 00:04:58.723 ' 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.723 16:51:33 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2138609c-c320-4c7c-acc3-736a9e124d02 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=2138609c-c320-4c7c-acc3-736a9e124d02 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.723 16:51:33 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.723 16:51:33 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.723 16:51:33 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.723 16:51:33 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.723 16:51:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.723 16:51:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.723 16:51:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.723 16:51:33 json_config -- paths/export.sh@5 -- # export PATH 00:04:58.723 16:51:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@51 -- # : 0 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.723 16:51:33 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.723 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.723 16:51:33 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:58.723 WARNING: No tests are enabled so not running JSON configuration tests 00:04:58.723 16:51:33 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:58.723 00:04:58.723 real 0m0.136s 00:04:58.723 user 0m0.087s 00:04:58.723 sys 0m0.049s 00:04:58.723 16:51:33 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.723 16:51:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.723 ************************************ 00:04:58.723 END TEST json_config 00:04:58.723 ************************************ 00:04:58.723 16:51:33 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.723 16:51:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:58.723 16:51:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:58.723 16:51:33 -- common/autotest_common.sh@10 -- # set +x 00:04:58.723 ************************************ 00:04:58.723 START TEST json_config_extra_key 00:04:58.723 ************************************ 00:04:58.723 16:51:33 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:58.984 16:51:33 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:58.984 16:51:33 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:58.984 16:51:33 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.985 16:51:33 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:58.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.985 --rc genhtml_branch_coverage=1 00:04:58.985 --rc genhtml_function_coverage=1 00:04:58.985 --rc genhtml_legend=1 00:04:58.985 --rc geninfo_all_blocks=1 00:04:58.985 --rc geninfo_unexecuted_blocks=1 00:04:58.985 00:04:58.985 ' 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:58.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.985 --rc genhtml_branch_coverage=1 00:04:58.985 --rc genhtml_function_coverage=1 00:04:58.985 --rc genhtml_legend=1 00:04:58.985 --rc geninfo_all_blocks=1 00:04:58.985 --rc geninfo_unexecuted_blocks=1 00:04:58.985 00:04:58.985 ' 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:58.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.985 --rc genhtml_branch_coverage=1 00:04:58.985 --rc genhtml_function_coverage=1 00:04:58.985 --rc genhtml_legend=1 00:04:58.985 --rc geninfo_all_blocks=1 00:04:58.985 --rc geninfo_unexecuted_blocks=1 00:04:58.985 00:04:58.985 ' 00:04:58.985 16:51:33 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:58.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.985 --rc genhtml_branch_coverage=1 00:04:58.985 --rc 
genhtml_function_coverage=1 00:04:58.985 --rc genhtml_legend=1 00:04:58.985 --rc geninfo_all_blocks=1 00:04:58.985 --rc geninfo_unexecuted_blocks=1 00:04:58.985 00:04:58.985 ' 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2138609c-c320-4c7c-acc3-736a9e124d02 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=2138609c-c320-4c7c-acc3-736a9e124d02 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:58.985 16:51:33 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:58.985 16:51:33 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.985 16:51:33 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.985 16:51:33 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.985 16:51:33 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:58.985 16:51:33 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:58.985 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:58.985 16:51:33 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:58.985 INFO: launching applications... 00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
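Note: the "[: : integer expression expected" message traced above (nvmf/common.sh line 33, hit in both json_config and json_config_extra_key) comes from handing an empty expansion to a numeric test. A minimal sketch of the failure and a defensive rewrite; the variable name below is hypothetical, the traced script tests one of its own flags:

  # Fails exactly as in the trace when the variable expands to the empty string:
  SOME_FLAG=""
  [ "$SOME_FLAG" -eq 1 ] && echo enabled
  # -> [: : integer expression expected (test exits with status 2, branch is skipped)

  # Defensive form: default the expansion to 0 so the operand is always numeric.
  [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled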
00:04:58.985 16:51:33 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.985 16:51:33 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.986 16:51:33 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57758 00:04:58.986 Waiting for target to run... 00:04:58.986 16:51:33 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.986 16:51:33 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57758 /var/tmp/spdk_tgt.sock 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57758 ']' 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.986 16:51:33 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.986 16:51:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:58.986 [2024-12-05 16:51:33.259568] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:04:58.986 [2024-12-05 16:51:33.259667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57758 ] 00:04:59.246 [2024-12-05 16:51:33.570908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.504 [2024-12-05 16:51:33.659192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.072 16:51:34 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.072 16:51:34 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:00.072 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:00.072 INFO: shutting down applications... 00:05:00.072 16:51:34 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
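Note: waitforlisten 57758 above polls until spdk_tgt is both alive and listening on the requested socket. A minimal sketch of that pattern, assuming the simple socket probe below rather than SPDK's exact helper:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for (( i = 0; i < 100; i++ )); do           # max_retries=100, as in the trace
          kill -0 "$pid" 2>/dev/null || return 1  # target died before it could listen
          [ -S "$rpc_addr" ] && return 0          # socket is up, target is ready
          sleep 0.1
      done
      return 1                                    # timed out
  }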
00:05:00.072 16:51:34 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57758 ]] 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57758 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:05:00.072 16:51:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.332 16:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.332 16:51:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.332 16:51:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:05:00.332 16:51:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:00.901 16:51:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:00.901 16:51:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:00.901 16:51:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:05:00.901 16:51:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:01.472 16:51:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:01.472 16:51:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:01.472 16:51:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:05:01.472 16:51:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57758 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:02.042 SPDK target shutdown done 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:02.042 16:51:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:02.042 Success 00:05:02.042 16:51:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:02.042 ************************************ 00:05:02.042 END TEST json_config_extra_key 00:05:02.042 ************************************ 00:05:02.042 00:05:02.042 real 0m3.111s 00:05:02.042 user 0m2.709s 00:05:02.042 sys 0m0.376s 00:05:02.042 16:51:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.042 16:51:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:02.043 16:51:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.043 16:51:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.043 16:51:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.043 16:51:36 -- common/autotest_common.sh@10 -- # set +x 00:05:02.043 
************************************ 00:05:02.043 START TEST alias_rpc 00:05:02.043 ************************************ 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:02.043 * Looking for test storage... 00:05:02.043 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.043 16:51:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.043 --rc genhtml_branch_coverage=1 00:05:02.043 --rc genhtml_function_coverage=1 00:05:02.043 --rc genhtml_legend=1 00:05:02.043 --rc geninfo_all_blocks=1 00:05:02.043 --rc geninfo_unexecuted_blocks=1 00:05:02.043 00:05:02.043 ' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.043 --rc genhtml_branch_coverage=1 00:05:02.043 --rc genhtml_function_coverage=1 00:05:02.043 --rc genhtml_legend=1 00:05:02.043 --rc geninfo_all_blocks=1 00:05:02.043 --rc geninfo_unexecuted_blocks=1 00:05:02.043 00:05:02.043 ' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.043 --rc genhtml_branch_coverage=1 00:05:02.043 --rc genhtml_function_coverage=1 00:05:02.043 --rc genhtml_legend=1 00:05:02.043 --rc geninfo_all_blocks=1 00:05:02.043 --rc geninfo_unexecuted_blocks=1 00:05:02.043 00:05:02.043 ' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.043 --rc genhtml_branch_coverage=1 00:05:02.043 --rc genhtml_function_coverage=1 00:05:02.043 --rc genhtml_legend=1 00:05:02.043 --rc geninfo_all_blocks=1 00:05:02.043 --rc geninfo_unexecuted_blocks=1 00:05:02.043 00:05:02.043 ' 00:05:02.043 16:51:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:02.043 16:51:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57851 00:05:02.043 16:51:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57851 00:05:02.043 16:51:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57851 ']' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
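Note: the lcov version probe traced at the top of each test (IFS=.-:, read -ra ver1/ver2, per-field compare) decides whether the legacy --rc coverage options are needed. A simplified sketch of that lt/cmp_versions logic from scripts/common.sh:

  # Succeeds when $1 < $2, comparing dot/dash/colon-separated numeric fields.
  lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly smaller field: less-than
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly larger field: not less-than
      done
      return 1    # all fields equal, so not less-than
  }
  lt 1.15 2 && echo "lcov < 2: use the legacy --rc coverage options"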
00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.043 16:51:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.303 [2024-12-05 16:51:36.408192] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:02.303 [2024-12-05 16:51:36.408282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57851 ] 00:05:02.303 [2024-12-05 16:51:36.563542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.303 [2024-12-05 16:51:36.659931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:03.245 16:51:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:03.245 16:51:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57851 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57851 ']' 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57851 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57851 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.245 killing process with pid 57851 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57851' 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@973 -- # kill 57851 00:05:03.245 16:51:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 57851 00:05:04.630 00:05:04.630 real 0m2.776s 00:05:04.630 user 0m2.942s 00:05:04.630 sys 0m0.362s 00:05:04.630 16:51:38 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.630 16:51:38 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.630 ************************************ 00:05:04.630 END TEST alias_rpc 00:05:04.630 ************************************ 00:05:04.891 16:51:39 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:04.891 16:51:39 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.891 16:51:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.891 16:51:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.891 16:51:39 -- common/autotest_common.sh@10 -- # set +x 00:05:04.891 ************************************ 00:05:04.891 START TEST spdkcli_tcp 00:05:04.891 ************************************ 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:04.891 * Looking for test storage... 
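Note: killprocess 57851 above checks what it is about to signal before sending anything. A condensed sketch of the steps visible in the trace (kill -0 liveness probe, comm lookup via ps, the reactor_0-vs-sudo guard, then kill and wait):

  killprocess() {
      local pid=$1 process_name=
      kill -0 "$pid" || return 1                           # nothing to do if it already exited
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 for spdk_tgt
      fi
      if [ "$process_name" != sudo ]; then                 # never signal a sudo wrapper directly
          echo "killing process with pid $pid"
          kill "$pid"
          wait "$pid" || true                              # reap it; works when pid is our child
      fi
  }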
00:05:04.891 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.891 16:51:39 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:04.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.891 --rc genhtml_branch_coverage=1 00:05:04.891 --rc genhtml_function_coverage=1 00:05:04.891 --rc genhtml_legend=1 00:05:04.891 --rc geninfo_all_blocks=1 00:05:04.891 --rc geninfo_unexecuted_blocks=1 00:05:04.891 00:05:04.891 ' 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:04.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.891 --rc genhtml_branch_coverage=1 00:05:04.891 --rc genhtml_function_coverage=1 00:05:04.891 --rc genhtml_legend=1 00:05:04.891 --rc geninfo_all_blocks=1 00:05:04.891 --rc geninfo_unexecuted_blocks=1 00:05:04.891 
00:05:04.891 ' 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:04.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.891 --rc genhtml_branch_coverage=1 00:05:04.891 --rc genhtml_function_coverage=1 00:05:04.891 --rc genhtml_legend=1 00:05:04.891 --rc geninfo_all_blocks=1 00:05:04.891 --rc geninfo_unexecuted_blocks=1 00:05:04.891 00:05:04.891 ' 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:04.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.891 --rc genhtml_branch_coverage=1 00:05:04.891 --rc genhtml_function_coverage=1 00:05:04.891 --rc genhtml_legend=1 00:05:04.891 --rc geninfo_all_blocks=1 00:05:04.891 --rc geninfo_unexecuted_blocks=1 00:05:04.891 00:05:04.891 ' 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.891 16:51:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.891 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57941 00:05:04.892 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57941 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57941 ']' 00:05:04.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:04.892 16:51:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:04.892 16:51:39 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:04.892 [2024-12-05 16:51:39.254381] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:04.892 [2024-12-05 16:51:39.254495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57941 ] 00:05:05.152 [2024-12-05 16:51:39.409445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.152 [2024-12-05 16:51:39.487844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.152 [2024-12-05 16:51:39.487939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.724 16:51:40 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.724 16:51:40 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:05.724 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57958 00:05:05.724 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:05.724 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:05.985 [ 00:05:05.985 "bdev_malloc_delete", 00:05:05.985 "bdev_malloc_create", 00:05:05.985 "bdev_null_resize", 00:05:05.985 "bdev_null_delete", 00:05:05.985 "bdev_null_create", 00:05:05.985 "bdev_nvme_cuse_unregister", 00:05:05.985 "bdev_nvme_cuse_register", 00:05:05.985 "bdev_opal_new_user", 00:05:05.985 "bdev_opal_set_lock_state", 00:05:05.985 "bdev_opal_delete", 00:05:05.985 "bdev_opal_get_info", 00:05:05.985 "bdev_opal_create", 00:05:05.985 "bdev_nvme_opal_revert", 00:05:05.985 "bdev_nvme_opal_init", 00:05:05.985 "bdev_nvme_send_cmd", 00:05:05.985 "bdev_nvme_set_keys", 00:05:05.985 "bdev_nvme_get_path_iostat", 00:05:05.985 "bdev_nvme_get_mdns_discovery_info", 00:05:05.985 "bdev_nvme_stop_mdns_discovery", 00:05:05.985 "bdev_nvme_start_mdns_discovery", 00:05:05.985 "bdev_nvme_set_multipath_policy", 00:05:05.985 "bdev_nvme_set_preferred_path", 00:05:05.985 "bdev_nvme_get_io_paths", 00:05:05.985 "bdev_nvme_remove_error_injection", 00:05:05.985 "bdev_nvme_add_error_injection", 00:05:05.985 "bdev_nvme_get_discovery_info", 00:05:05.985 "bdev_nvme_stop_discovery", 00:05:05.985 "bdev_nvme_start_discovery", 00:05:05.985 "bdev_nvme_get_controller_health_info", 00:05:05.985 "bdev_nvme_disable_controller", 00:05:05.985 "bdev_nvme_enable_controller", 00:05:05.985 "bdev_nvme_reset_controller", 00:05:05.985 "bdev_nvme_get_transport_statistics", 00:05:05.985 "bdev_nvme_apply_firmware", 00:05:05.985 "bdev_nvme_detach_controller", 00:05:05.985 "bdev_nvme_get_controllers", 00:05:05.985 "bdev_nvme_attach_controller", 00:05:05.985 "bdev_nvme_set_hotplug", 00:05:05.985 "bdev_nvme_set_options", 00:05:05.985 "bdev_passthru_delete", 00:05:05.985 "bdev_passthru_create", 00:05:05.985 "bdev_lvol_set_parent_bdev", 00:05:05.985 "bdev_lvol_set_parent", 00:05:05.985 "bdev_lvol_check_shallow_copy", 00:05:05.985 "bdev_lvol_start_shallow_copy", 00:05:05.985 "bdev_lvol_grow_lvstore", 00:05:05.985 "bdev_lvol_get_lvols", 00:05:05.985 "bdev_lvol_get_lvstores", 00:05:05.985 "bdev_lvol_delete", 00:05:05.985 "bdev_lvol_set_read_only", 00:05:05.985 "bdev_lvol_resize", 00:05:05.985 "bdev_lvol_decouple_parent", 00:05:05.985 "bdev_lvol_inflate", 00:05:05.985 "bdev_lvol_rename", 00:05:05.985 "bdev_lvol_clone_bdev", 00:05:05.985 "bdev_lvol_clone", 00:05:05.985 "bdev_lvol_snapshot", 00:05:05.985 "bdev_lvol_create", 00:05:05.985 "bdev_lvol_delete_lvstore", 00:05:05.985 "bdev_lvol_rename_lvstore", 00:05:05.985 
"bdev_lvol_create_lvstore", 00:05:05.985 "bdev_raid_set_options", 00:05:05.985 "bdev_raid_remove_base_bdev", 00:05:05.985 "bdev_raid_add_base_bdev", 00:05:05.985 "bdev_raid_delete", 00:05:05.985 "bdev_raid_create", 00:05:05.985 "bdev_raid_get_bdevs", 00:05:05.985 "bdev_error_inject_error", 00:05:05.985 "bdev_error_delete", 00:05:05.985 "bdev_error_create", 00:05:05.985 "bdev_split_delete", 00:05:05.985 "bdev_split_create", 00:05:05.985 "bdev_delay_delete", 00:05:05.985 "bdev_delay_create", 00:05:05.985 "bdev_delay_update_latency", 00:05:05.985 "bdev_zone_block_delete", 00:05:05.985 "bdev_zone_block_create", 00:05:05.985 "blobfs_create", 00:05:05.985 "blobfs_detect", 00:05:05.985 "blobfs_set_cache_size", 00:05:05.985 "bdev_xnvme_delete", 00:05:05.985 "bdev_xnvme_create", 00:05:05.985 "bdev_aio_delete", 00:05:05.985 "bdev_aio_rescan", 00:05:05.985 "bdev_aio_create", 00:05:05.985 "bdev_ftl_set_property", 00:05:05.985 "bdev_ftl_get_properties", 00:05:05.985 "bdev_ftl_get_stats", 00:05:05.985 "bdev_ftl_unmap", 00:05:05.985 "bdev_ftl_unload", 00:05:05.985 "bdev_ftl_delete", 00:05:05.985 "bdev_ftl_load", 00:05:05.985 "bdev_ftl_create", 00:05:05.985 "bdev_virtio_attach_controller", 00:05:05.985 "bdev_virtio_scsi_get_devices", 00:05:05.985 "bdev_virtio_detach_controller", 00:05:05.985 "bdev_virtio_blk_set_hotplug", 00:05:05.985 "bdev_iscsi_delete", 00:05:05.985 "bdev_iscsi_create", 00:05:05.985 "bdev_iscsi_set_options", 00:05:05.985 "accel_error_inject_error", 00:05:05.985 "ioat_scan_accel_module", 00:05:05.985 "dsa_scan_accel_module", 00:05:05.985 "iaa_scan_accel_module", 00:05:05.985 "keyring_file_remove_key", 00:05:05.985 "keyring_file_add_key", 00:05:05.985 "keyring_linux_set_options", 00:05:05.985 "fsdev_aio_delete", 00:05:05.985 "fsdev_aio_create", 00:05:05.985 "iscsi_get_histogram", 00:05:05.985 "iscsi_enable_histogram", 00:05:05.985 "iscsi_set_options", 00:05:05.985 "iscsi_get_auth_groups", 00:05:05.985 "iscsi_auth_group_remove_secret", 00:05:05.985 "iscsi_auth_group_add_secret", 00:05:05.985 "iscsi_delete_auth_group", 00:05:05.985 "iscsi_create_auth_group", 00:05:05.985 "iscsi_set_discovery_auth", 00:05:05.985 "iscsi_get_options", 00:05:05.985 "iscsi_target_node_request_logout", 00:05:05.985 "iscsi_target_node_set_redirect", 00:05:05.985 "iscsi_target_node_set_auth", 00:05:05.985 "iscsi_target_node_add_lun", 00:05:05.985 "iscsi_get_stats", 00:05:05.986 "iscsi_get_connections", 00:05:05.986 "iscsi_portal_group_set_auth", 00:05:05.986 "iscsi_start_portal_group", 00:05:05.986 "iscsi_delete_portal_group", 00:05:05.986 "iscsi_create_portal_group", 00:05:05.986 "iscsi_get_portal_groups", 00:05:05.986 "iscsi_delete_target_node", 00:05:05.986 "iscsi_target_node_remove_pg_ig_maps", 00:05:05.986 "iscsi_target_node_add_pg_ig_maps", 00:05:05.986 "iscsi_create_target_node", 00:05:05.986 "iscsi_get_target_nodes", 00:05:05.986 "iscsi_delete_initiator_group", 00:05:05.986 "iscsi_initiator_group_remove_initiators", 00:05:05.986 "iscsi_initiator_group_add_initiators", 00:05:05.986 "iscsi_create_initiator_group", 00:05:05.986 "iscsi_get_initiator_groups", 00:05:05.986 "nvmf_set_crdt", 00:05:05.986 "nvmf_set_config", 00:05:05.986 "nvmf_set_max_subsystems", 00:05:05.986 "nvmf_stop_mdns_prr", 00:05:05.986 "nvmf_publish_mdns_prr", 00:05:05.986 "nvmf_subsystem_get_listeners", 00:05:05.986 "nvmf_subsystem_get_qpairs", 00:05:05.986 "nvmf_subsystem_get_controllers", 00:05:05.986 "nvmf_get_stats", 00:05:05.986 "nvmf_get_transports", 00:05:05.986 "nvmf_create_transport", 00:05:05.986 "nvmf_get_targets", 00:05:05.986 
"nvmf_delete_target", 00:05:05.986 "nvmf_create_target", 00:05:05.986 "nvmf_subsystem_allow_any_host", 00:05:05.986 "nvmf_subsystem_set_keys", 00:05:05.986 "nvmf_subsystem_remove_host", 00:05:05.986 "nvmf_subsystem_add_host", 00:05:05.986 "nvmf_ns_remove_host", 00:05:05.986 "nvmf_ns_add_host", 00:05:05.986 "nvmf_subsystem_remove_ns", 00:05:05.986 "nvmf_subsystem_set_ns_ana_group", 00:05:05.986 "nvmf_subsystem_add_ns", 00:05:05.986 "nvmf_subsystem_listener_set_ana_state", 00:05:05.986 "nvmf_discovery_get_referrals", 00:05:05.986 "nvmf_discovery_remove_referral", 00:05:05.986 "nvmf_discovery_add_referral", 00:05:05.986 "nvmf_subsystem_remove_listener", 00:05:05.986 "nvmf_subsystem_add_listener", 00:05:05.986 "nvmf_delete_subsystem", 00:05:05.986 "nvmf_create_subsystem", 00:05:05.986 "nvmf_get_subsystems", 00:05:05.986 "env_dpdk_get_mem_stats", 00:05:05.986 "nbd_get_disks", 00:05:05.986 "nbd_stop_disk", 00:05:05.986 "nbd_start_disk", 00:05:05.986 "ublk_recover_disk", 00:05:05.986 "ublk_get_disks", 00:05:05.986 "ublk_stop_disk", 00:05:05.986 "ublk_start_disk", 00:05:05.986 "ublk_destroy_target", 00:05:05.986 "ublk_create_target", 00:05:05.986 "virtio_blk_create_transport", 00:05:05.986 "virtio_blk_get_transports", 00:05:05.986 "vhost_controller_set_coalescing", 00:05:05.986 "vhost_get_controllers", 00:05:05.986 "vhost_delete_controller", 00:05:05.986 "vhost_create_blk_controller", 00:05:05.986 "vhost_scsi_controller_remove_target", 00:05:05.986 "vhost_scsi_controller_add_target", 00:05:05.986 "vhost_start_scsi_controller", 00:05:05.986 "vhost_create_scsi_controller", 00:05:05.986 "thread_set_cpumask", 00:05:05.986 "scheduler_set_options", 00:05:05.986 "framework_get_governor", 00:05:05.986 "framework_get_scheduler", 00:05:05.986 "framework_set_scheduler", 00:05:05.986 "framework_get_reactors", 00:05:05.986 "thread_get_io_channels", 00:05:05.986 "thread_get_pollers", 00:05:05.986 "thread_get_stats", 00:05:05.986 "framework_monitor_context_switch", 00:05:05.986 "spdk_kill_instance", 00:05:05.986 "log_enable_timestamps", 00:05:05.986 "log_get_flags", 00:05:05.986 "log_clear_flag", 00:05:05.986 "log_set_flag", 00:05:05.986 "log_get_level", 00:05:05.986 "log_set_level", 00:05:05.986 "log_get_print_level", 00:05:05.986 "log_set_print_level", 00:05:05.986 "framework_enable_cpumask_locks", 00:05:05.986 "framework_disable_cpumask_locks", 00:05:05.986 "framework_wait_init", 00:05:05.986 "framework_start_init", 00:05:05.986 "scsi_get_devices", 00:05:05.986 "bdev_get_histogram", 00:05:05.986 "bdev_enable_histogram", 00:05:05.986 "bdev_set_qos_limit", 00:05:05.986 "bdev_set_qd_sampling_period", 00:05:05.986 "bdev_get_bdevs", 00:05:05.986 "bdev_reset_iostat", 00:05:05.986 "bdev_get_iostat", 00:05:05.986 "bdev_examine", 00:05:05.986 "bdev_wait_for_examine", 00:05:05.986 "bdev_set_options", 00:05:05.986 "accel_get_stats", 00:05:05.986 "accel_set_options", 00:05:05.986 "accel_set_driver", 00:05:05.986 "accel_crypto_key_destroy", 00:05:05.986 "accel_crypto_keys_get", 00:05:05.986 "accel_crypto_key_create", 00:05:05.986 "accel_assign_opc", 00:05:05.986 "accel_get_module_info", 00:05:05.986 "accel_get_opc_assignments", 00:05:05.986 "vmd_rescan", 00:05:05.986 "vmd_remove_device", 00:05:05.986 "vmd_enable", 00:05:05.986 "sock_get_default_impl", 00:05:05.986 "sock_set_default_impl", 00:05:05.986 "sock_impl_set_options", 00:05:05.986 "sock_impl_get_options", 00:05:05.986 "iobuf_get_stats", 00:05:05.986 "iobuf_set_options", 00:05:05.986 "keyring_get_keys", 00:05:05.986 "framework_get_pci_devices", 00:05:05.986 
"framework_get_config", 00:05:05.986 "framework_get_subsystems", 00:05:05.986 "fsdev_set_opts", 00:05:05.986 "fsdev_get_opts", 00:05:05.986 "trace_get_info", 00:05:05.986 "trace_get_tpoint_group_mask", 00:05:05.986 "trace_disable_tpoint_group", 00:05:05.986 "trace_enable_tpoint_group", 00:05:05.986 "trace_clear_tpoint_mask", 00:05:05.986 "trace_set_tpoint_mask", 00:05:05.986 "notify_get_notifications", 00:05:05.986 "notify_get_types", 00:05:05.986 "spdk_get_version", 00:05:05.986 "rpc_get_methods" 00:05:05.986 ] 00:05:05.986 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:05.986 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:05.986 16:51:40 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57941 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57941 ']' 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57941 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57941 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57941' 00:05:05.986 killing process with pid 57941 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57941 00:05:05.986 16:51:40 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57941 00:05:07.398 00:05:07.398 real 0m2.484s 00:05:07.398 user 0m4.452s 00:05:07.398 sys 0m0.414s 00:05:07.398 16:51:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.399 16:51:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:07.399 ************************************ 00:05:07.399 END TEST spdkcli_tcp 00:05:07.399 ************************************ 00:05:07.399 16:51:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:07.399 16:51:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.399 16:51:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.399 16:51:41 -- common/autotest_common.sh@10 -- # set +x 00:05:07.399 ************************************ 00:05:07.399 START TEST dpdk_mem_utility 00:05:07.399 ************************************ 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:07.399 * Looking for test storage... 
00:05:07.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.399 16:51:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.399 --rc genhtml_branch_coverage=1 00:05:07.399 --rc genhtml_function_coverage=1 00:05:07.399 --rc genhtml_legend=1 00:05:07.399 --rc geninfo_all_blocks=1 00:05:07.399 --rc geninfo_unexecuted_blocks=1 00:05:07.399 00:05:07.399 ' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.399 --rc 
genhtml_branch_coverage=1 00:05:07.399 --rc genhtml_function_coverage=1 00:05:07.399 --rc genhtml_legend=1 00:05:07.399 --rc geninfo_all_blocks=1 00:05:07.399 --rc geninfo_unexecuted_blocks=1 00:05:07.399 00:05:07.399 ' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.399 --rc genhtml_branch_coverage=1 00:05:07.399 --rc genhtml_function_coverage=1 00:05:07.399 --rc genhtml_legend=1 00:05:07.399 --rc geninfo_all_blocks=1 00:05:07.399 --rc geninfo_unexecuted_blocks=1 00:05:07.399 00:05:07.399 ' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.399 --rc genhtml_branch_coverage=1 00:05:07.399 --rc genhtml_function_coverage=1 00:05:07.399 --rc genhtml_legend=1 00:05:07.399 --rc geninfo_all_blocks=1 00:05:07.399 --rc geninfo_unexecuted_blocks=1 00:05:07.399 00:05:07.399 ' 00:05:07.399 16:51:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:07.399 16:51:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58047 00:05:07.399 16:51:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58047 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58047 ']' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.399 16:51:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.399 16:51:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:07.667 [2024-12-05 16:51:41.764910] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:05:07.667 [2024-12-05 16:51:41.765038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58047 ] 00:05:07.667 [2024-12-05 16:51:41.920836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.667 [2024-12-05 16:51:41.998059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.234 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.234 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:08.234 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:08.234 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:08.234 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:08.234 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:08.493 { 00:05:08.493 "filename": "/tmp/spdk_mem_dump.txt" 00:05:08.493 } 00:05:08.493 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.493 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:08.493 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:08.493 1 heaps totaling size 824.000000 MiB 00:05:08.493 size: 824.000000 MiB heap id: 0 00:05:08.493 end heaps---------- 00:05:08.493 9 mempools totaling size 603.782043 MiB 00:05:08.493 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:08.493 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:08.493 size: 100.555481 MiB name: bdev_io_58047 00:05:08.493 size: 50.003479 MiB name: msgpool_58047 00:05:08.493 size: 36.509338 MiB name: fsdev_io_58047 00:05:08.493 size: 21.763794 MiB name: PDU_Pool 00:05:08.493 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:08.493 size: 4.133484 MiB name: evtpool_58047 00:05:08.493 size: 0.026123 MiB name: Session_Pool 00:05:08.493 end mempools------- 00:05:08.493 6 memzones totaling size 4.142822 MiB 00:05:08.493 size: 1.000366 MiB name: RG_ring_0_58047 00:05:08.493 size: 1.000366 MiB name: RG_ring_1_58047 00:05:08.493 size: 1.000366 MiB name: RG_ring_4_58047 00:05:08.493 size: 1.000366 MiB name: RG_ring_5_58047 00:05:08.493 size: 0.125366 MiB name: RG_ring_2_58047 00:05:08.493 size: 0.015991 MiB name: RG_ring_3_58047 00:05:08.493 end memzones------- 00:05:08.493 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:08.493 heap id: 0 total size: 824.000000 MiB number of busy elements: 327 number of free elements: 18 00:05:08.493 list of free elements. 
size: 16.778442 MiB
00:05:08.493 element at address: 0x200006400000 with size: 1.995972 MiB
00:05:08.493 element at address: 0x20000a600000 with size: 1.995972 MiB
00:05:08.493 element at address: 0x200003e00000 with size: 1.991028 MiB
00:05:08.493 element at address: 0x200019500040 with size: 0.999939 MiB
00:05:08.493 element at address: 0x200019900040 with size: 0.999939 MiB
00:05:08.493 element at address: 0x200019a00000 with size: 0.999084 MiB
00:05:08.493 element at address: 0x200032600000 with size: 0.994324 MiB
00:05:08.493 element at address: 0x200000400000 with size: 0.992004 MiB
00:05:08.493 element at address: 0x200019200000 with size: 0.959656 MiB
00:05:08.493 element at address: 0x200019d00040 with size: 0.936401 MiB
00:05:08.493 element at address: 0x200000200000 with size: 0.716980 MiB
00:05:08.493 element at address: 0x20001b400000 with size: 0.559021 MiB
00:05:08.493 element at address: 0x200000c00000 with size: 0.489197 MiB
00:05:08.493 element at address: 0x200019600000 with size: 0.487976 MiB
00:05:08.493 element at address: 0x200019e00000 with size: 0.485413 MiB
00:05:08.493 element at address: 0x200012c00000 with size: 0.434204 MiB
00:05:08.493 element at address: 0x200028800000 with size: 0.390442 MiB
00:05:08.493 element at address: 0x200000800000 with size: 0.350891 MiB
00:05:08.493 list of standard malloc elements. size: 199.290649 MiB
00:05:08.493 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:08.493 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:08.493 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:08.493 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:08.493 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:08.493 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:08.493 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:08.493 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:08.493 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:08.493 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:08.493 element at address: 0x200012bff040 with size: 0.000305 MiB
[several hundred further malloc elements, each of size: 0.000244 MiB, at addresses 0x2000002d7b00 through 0x20002886fd80, omitted; the list's final entry follows]
00:05:08.494 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:08.494 list of memzone associated elements. size: 607.930908 MiB 00:05:08.494 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:08.494 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:08.494 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:08.494 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:08.494 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:08.494 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58047_0 00:05:08.494 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:08.494 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58047_0 00:05:08.494 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:08.494 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58047_0 00:05:08.494 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:08.494 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:08.494 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:08.494 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:08.494 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:08.494 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58047_0 00:05:08.494 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:08.494 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58047 00:05:08.494 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:08.494 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58047 00:05:08.494 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:08.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:08.494 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:08.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:08.494 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:08.494 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:08.494 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:08.494 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:08.494 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:08.494 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58047 00:05:08.494 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:08.494 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58047 00:05:08.494 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:08.494 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58047 00:05:08.494 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:08.494 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58047 00:05:08.494 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:08.494 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58047 00:05:08.494 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:08.494 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58047 00:05:08.494 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:08.494 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:08.494 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:08.494 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 
00:05:08.494 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:08.494 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:08.494 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:08.494 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58047 00:05:08.494 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:08.494 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58047 00:05:08.494 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:08.494 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:08.494 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:08.494 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:08.494 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:08.494 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58047 00:05:08.494 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:08.494 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:08.494 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:08.494 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58047 00:05:08.494 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:08.494 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58047 00:05:08.494 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:08.494 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58047 00:05:08.494 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:08.494 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:08.494 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:08.494 16:51:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58047 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58047 ']' 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58047 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58047 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.495 killing process with pid 58047 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58047' 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58047 00:05:08.495 16:51:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58047 00:05:09.869 00:05:09.869 real 0m2.356s 00:05:09.869 user 0m2.408s 00:05:09.869 sys 0m0.360s 00:05:09.869 16:51:43 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.869 16:51:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.869 ************************************ 00:05:09.869 END TEST dpdk_mem_utility 00:05:09.869 ************************************ 00:05:09.869 16:51:43 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:09.869 16:51:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.869 
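# The dpdk_mem_utility test above exercises two pieces of SPDK tooling visible in the log:
# the env_dpdk_get_mem_stats RPC, which asks the running target to write a DPDK memory dump
# (here /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py, which summarizes that dump.
# A minimal by-hand sketch of the same flow, assuming an SPDK checkout and an already
# running spdk_tgt:
#   ./scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
#   ./scripts/dpdk_mem_info.py                 # heap/mempool/memzone summary, as printed above
#   ./scripts/dpdk_mem_info.py -m 0            # per-element breakdown of heap 0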
16:51:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.869 16:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:09.869 ************************************ 00:05:09.869 START TEST event 00:05:09.869 ************************************ 00:05:09.869 16:51:43 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:09.869 * Looking for test storage... 00:05:09.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:09.869 16:51:44 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.869 16:51:44 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.869 16:51:44 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.869 16:51:44 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.869 16:51:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.869 16:51:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.869 16:51:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.869 16:51:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.869 16:51:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.869 16:51:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.869 16:51:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.869 16:51:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.869 16:51:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.869 16:51:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.869 16:51:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.869 16:51:44 event -- scripts/common.sh@344 -- # case "$op" in 00:05:09.869 16:51:44 event -- scripts/common.sh@345 -- # : 1 00:05:09.869 16:51:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.869 16:51:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.869 16:51:44 event -- scripts/common.sh@365 -- # decimal 1 00:05:09.869 16:51:44 event -- scripts/common.sh@353 -- # local d=1 00:05:09.869 16:51:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.869 16:51:44 event -- scripts/common.sh@355 -- # echo 1 00:05:09.869 16:51:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.869 16:51:44 event -- scripts/common.sh@366 -- # decimal 2 00:05:09.870 16:51:44 event -- scripts/common.sh@353 -- # local d=2 00:05:09.870 16:51:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.870 16:51:44 event -- scripts/common.sh@355 -- # echo 2 00:05:09.870 16:51:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.870 16:51:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.870 16:51:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.870 16:51:44 event -- scripts/common.sh@368 -- # return 0 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.870 --rc genhtml_branch_coverage=1 00:05:09.870 --rc genhtml_function_coverage=1 00:05:09.870 --rc genhtml_legend=1 00:05:09.870 --rc geninfo_all_blocks=1 00:05:09.870 --rc geninfo_unexecuted_blocks=1 00:05:09.870 00:05:09.870 ' 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.870 --rc genhtml_branch_coverage=1 00:05:09.870 --rc genhtml_function_coverage=1 00:05:09.870 --rc genhtml_legend=1 00:05:09.870 --rc geninfo_all_blocks=1 00:05:09.870 --rc geninfo_unexecuted_blocks=1 00:05:09.870 00:05:09.870 ' 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.870 --rc genhtml_branch_coverage=1 00:05:09.870 --rc genhtml_function_coverage=1 00:05:09.870 --rc genhtml_legend=1 00:05:09.870 --rc geninfo_all_blocks=1 00:05:09.870 --rc geninfo_unexecuted_blocks=1 00:05:09.870 00:05:09.870 ' 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.870 --rc genhtml_branch_coverage=1 00:05:09.870 --rc genhtml_function_coverage=1 00:05:09.870 --rc genhtml_legend=1 00:05:09.870 --rc geninfo_all_blocks=1 00:05:09.870 --rc geninfo_unexecuted_blocks=1 00:05:09.870 00:05:09.870 ' 00:05:09.870 16:51:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:09.870 16:51:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:09.870 16:51:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:09.870 16:51:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.870 16:51:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.870 ************************************ 00:05:09.870 START TEST event_perf 00:05:09.870 ************************************ 00:05:09.870 16:51:44 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:09.870 Running I/O for 1 seconds...[2024-12-05 
16:51:44.122265] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:09.870 [2024-12-05 16:51:44.122666] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58138 ] 00:05:10.128 [2024-12-05 16:51:44.277606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:10.128 [2024-12-05 16:51:44.361596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.128 [2024-12-05 16:51:44.362231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:10.128 [2024-12-05 16:51:44.362302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.128 Running I/O for 1 seconds...[2024-12-05 16:51:44.362325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.505 00:05:11.505 lcore 0: 203225 00:05:11.505 lcore 1: 203227 00:05:11.505 lcore 2: 203226 00:05:11.505 lcore 3: 203223 00:05:11.505 done. 00:05:11.505 00:05:11.505 real 0m1.397s 00:05:11.505 user 0m4.192s 00:05:11.505 sys 0m0.083s 00:05:11.505 16:51:45 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.505 16:51:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:11.505 ************************************ 00:05:11.505 END TEST event_perf 00:05:11.505 ************************************ 00:05:11.505 16:51:45 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:11.505 16:51:45 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:11.505 16:51:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.505 16:51:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:11.505 ************************************ 00:05:11.505 START TEST event_reactor 00:05:11.505 ************************************ 00:05:11.506 16:51:45 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:11.506 [2024-12-05 16:51:45.558155] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
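# event_perf, which completed just above, is a throughput microbenchmark: it posts events on
# every reactor in the core mask for the requested time and reports how many each lcore
# processed. A by-hand equivalent of the invocation the harness used (assuming an SPDK build
# tree):
#   ./test/event/event_perf/event_perf -m 0xF -t 1   # -m: hex core mask (cores 0-3); -t: seconds
# The "lcore N: ~203k" lines above are that per-core event count after one second.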
00:05:11.506 [2024-12-05 16:51:45.558268] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:05:11.506 [2024-12-05 16:51:45.713926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.506 [2024-12-05 16:51:45.794110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.879 test_start 00:05:12.879 oneshot 00:05:12.879 tick 100 00:05:12.879 tick 100 00:05:12.879 tick 250 00:05:12.879 tick 100 00:05:12.879 tick 100 00:05:12.879 tick 100 00:05:12.879 tick 250 00:05:12.879 tick 500 00:05:12.879 tick 100 00:05:12.879 tick 100 00:05:12.879 tick 250 00:05:12.879 tick 100 00:05:12.879 tick 100 00:05:12.879 test_end 00:05:12.879 00:05:12.879 real 0m1.388s 00:05:12.879 user 0m1.199s 00:05:12.879 sys 0m0.082s 00:05:12.879 16:51:46 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.879 ************************************ 00:05:12.879 END TEST event_reactor 00:05:12.879 ************************************ 00:05:12.879 16:51:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:12.879 16:51:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.879 16:51:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:12.879 16:51:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.879 16:51:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.879 ************************************ 00:05:12.879 START TEST event_reactor_perf 00:05:12.879 ************************************ 00:05:12.879 16:51:46 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:12.879 [2024-12-05 16:51:46.991282] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
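# The event_reactor test above drives a single reactor for one second (the app defaulted to
# core mask 0x1, as the EAL parameters show): a one-shot event fires once ("oneshot") and
# recurring timers at three periods print "tick 100", "tick 250" and "tick 500" each time
# they fire, until "test_end". A by-hand equivalent of the run above:
#   ./test/event/reactor/reactor -t 1    # -t: run time in seconds
# event_reactor_perf, starting next, instead runs the reactor flat-out and reports raw
# events per second (the "Performance:" line below).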
00:05:12.879 [2024-12-05 16:51:46.991390] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ] 00:05:12.880 [2024-12-05 16:51:47.146527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.880 [2024-12-05 16:51:47.225465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.253 test_start 00:05:14.253 test_end 00:05:14.253 Performance: 414757 events per second 00:05:14.253 00:05:14.253 real 0m1.387s 00:05:14.253 user 0m1.217s 00:05:14.253 sys 0m0.063s 00:05:14.253 16:51:48 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.253 ************************************ 00:05:14.253 END TEST event_reactor_perf 00:05:14.253 ************************************ 00:05:14.253 16:51:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.253 16:51:48 event -- event/event.sh@49 -- # uname -s 00:05:14.253 16:51:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:14.253 16:51:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:14.253 16:51:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.253 16:51:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.253 16:51:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.253 ************************************ 00:05:14.253 START TEST event_scheduler 00:05:14.253 ************************************ 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:14.253 * Looking for test storage... 
00:05:14.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.253 16:51:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.253 --rc genhtml_branch_coverage=1 00:05:14.253 --rc genhtml_function_coverage=1 00:05:14.253 --rc genhtml_legend=1 00:05:14.253 --rc geninfo_all_blocks=1 00:05:14.253 --rc geninfo_unexecuted_blocks=1 00:05:14.253 00:05:14.253 ' 00:05:14.253 16:51:48 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.253 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.253 --rc genhtml_branch_coverage=1 00:05:14.254 --rc genhtml_function_coverage=1 00:05:14.254 --rc genhtml_legend=1 00:05:14.254 --rc geninfo_all_blocks=1 00:05:14.254 --rc geninfo_unexecuted_blocks=1 00:05:14.254 00:05:14.254 ' 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.254 --rc genhtml_branch_coverage=1 00:05:14.254 --rc genhtml_function_coverage=1 00:05:14.254 --rc genhtml_legend=1 00:05:14.254 --rc geninfo_all_blocks=1 00:05:14.254 --rc geninfo_unexecuted_blocks=1 00:05:14.254 00:05:14.254 ' 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.254 --rc genhtml_branch_coverage=1 00:05:14.254 --rc genhtml_function_coverage=1 00:05:14.254 --rc genhtml_legend=1 00:05:14.254 --rc geninfo_all_blocks=1 00:05:14.254 --rc geninfo_unexecuted_blocks=1 00:05:14.254 00:05:14.254 ' 00:05:14.254 16:51:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:14.254 16:51:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58285 00:05:14.254 16:51:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.254 16:51:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58285 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58285 ']' 00:05:14.254 16:51:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.254 16:51:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.254 [2024-12-05 16:51:48.582327] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:14.254 [2024-12-05 16:51:48.582585] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58285 ] 00:05:14.511 [2024-12-05 16:51:48.736326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.512 [2024-12-05 16:51:48.837406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.512 [2024-12-05 16:51:48.837663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.512 [2024-12-05 16:51:48.838042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.512 [2024-12-05 16:51:48.838087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:15.077 16:51:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.077 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.077 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.077 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.077 POWER: Cannot set governor of lcore 0 to performance 00:05:15.077 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.077 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.077 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:15.077 POWER: Cannot set governor of lcore 0 to userspace 00:05:15.077 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:15.077 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:15.077 POWER: Unable to set Power Management Environment for lcore 0 00:05:15.077 [2024-12-05 16:51:49.431430] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:15.077 [2024-12-05 16:51:49.431449] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:15.077 [2024-12-05 16:51:49.431459] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:15.077 [2024-12-05 
16:51:49.431474] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:15.077 [2024-12-05 16:51:49.431482] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:15.077 [2024-12-05 16:51:49.431491] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.077 16:51:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.077 16:51:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.335 [2024-12-05 16:51:49.654818] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:15.335 16:51:49 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.335 16:51:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:15.335 16:51:49 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.335 16:51:49 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.335 16:51:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.335 ************************************ 00:05:15.335 START TEST scheduler_create_thread 00:05:15.335 ************************************ 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.335 2 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.335 3 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.335 4 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:15.335 16:51:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.335 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 5 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 6 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 7 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 8 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 9 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 10 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 16:51:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.594 16:51:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.188 ************************************ 00:05:16.188 END TEST scheduler_create_thread 00:05:16.188 ************************************ 00:05:16.188 16:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.188 00:05:16.188 real 0m0.594s 00:05:16.188 user 0m0.011s 00:05:16.188 sys 0m0.006s 00:05:16.188 16:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.188 16:51:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:16.188 16:51:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:16.188 16:51:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58285 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58285 ']' 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58285 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58285 00:05:16.188 killing process with pid 58285 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58285' 00:05:16.188 16:51:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58285 00:05:16.188 
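# The scheduler test wires everything up over RPC before the framework starts, which is why
# the app was launched with --wait-for-rpc: first the dynamic scheduler is selected, then
# init is kicked off, then the test plugin creates pinned active/idle threads whose load the
# scheduler must rebalance. The rpc_cmd calls above map onto scripts/rpc.py, roughly:
#   ./scripts/rpc.py framework_set_scheduler dynamic
#   ./scripts/rpc.py framework_start_init
#   ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# The POWER/guest-channel errors above are expected in a VM without writable cpufreq sysfs:
# the DPDK governor fails to initialize and the dynamic scheduler falls back to its default
# load limits (20/80/95), as the NOTICE lines show. killprocess, in progress here, verifies
# its target before killing it: kill -0 checks the pid exists and ps -o comm= confirms the
# name, reactor_2 in this case because SPDK names reactor threads reactor_<lcore> and
# -p 0x2 made lcore 2 the main core (--main-lcore=2 in the EAL parameters above).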
16:51:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58285 00:05:16.445 [2024-12-05 16:51:50.739655] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:17.012 00:05:17.012 real 0m2.924s 00:05:17.012 user 0m5.593s 00:05:17.012 sys 0m0.341s 00:05:17.012 16:51:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.012 16:51:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:17.012 ************************************ 00:05:17.012 END TEST event_scheduler 00:05:17.012 ************************************ 00:05:17.012 16:51:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:17.012 16:51:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:17.012 16:51:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.012 16:51:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.012 16:51:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.012 ************************************ 00:05:17.012 START TEST app_repeat 00:05:17.012 ************************************ 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:17.012 Process app_repeat pid: 58363 00:05:17.012 spdk_app_start Round 0 00:05:17.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58363 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58363' 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58363 /var/tmp/spdk-nbd.sock 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58363 ']' 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.012 16:51:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
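The app_repeat launch that follows starts the app against a UNIX-domain RPC socket and then blocks until the socket answers. A condensed sketch of that step; $rootdir stands for the spdk checkout, the poll loop is a simplified stand-in for the real waitforlisten in autotest_common.sh, and rpc_get_methods is assumed here as the liveness probe:

    rpc_server=/var/tmp/spdk-nbd.sock

    # start the repeat app on cores 0-1 (-m 0x3), arguments as traced above
    "$rootdir/test/event/app_repeat/app_repeat" -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    # simplified waitforlisten: poll until the app accepts RPCs on the socket
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s "$rpc_server" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done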
00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.012 16:51:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:17.270 [2024-12-05 16:51:51.402818] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:17.270 [2024-12-05 16:51:51.403094] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58363 ] 00:05:17.270 [2024-12-05 16:51:51.556040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.270 [2024-12-05 16:51:51.635785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.270 [2024-12-05 16:51:51.635892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.204 16:51:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.204 16:51:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.204 16:51:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.204 Malloc0 00:05:18.204 16:51:52 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:18.466 Malloc1 00:05:18.466 16:51:52 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:18.466 16:51:52 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.467 16:51:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:18.727 /dev/nbd0 00:05:18.727 16:51:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:18.727 16:51:52 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.727 16:51:52 event.app_repeat 
-- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.727 16:51:52 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.727 1+0 records in 00:05:18.727 1+0 records out 00:05:18.728 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370237 s, 11.1 MB/s 00:05:18.728 16:51:52 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.728 16:51:52 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.728 16:51:52 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.728 16:51:52 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.728 16:51:52 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.728 16:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.728 16:51:52 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.728 16:51:52 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.989 /dev/nbd1 00:05:18.989 16:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.989 16:51:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.989 1+0 records in 00:05:18.989 1+0 records out 00:05:18.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195023 s, 21.0 MB/s 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:18.989 16:51:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:18.989 16:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.989 
16:51:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.989 16:51:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.989 16:51:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.990 16:51:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:19.251 { 00:05:19.251 "nbd_device": "/dev/nbd0", 00:05:19.251 "bdev_name": "Malloc0" 00:05:19.251 }, 00:05:19.251 { 00:05:19.251 "nbd_device": "/dev/nbd1", 00:05:19.251 "bdev_name": "Malloc1" 00:05:19.251 } 00:05:19.251 ]' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:19.251 { 00:05:19.251 "nbd_device": "/dev/nbd0", 00:05:19.251 "bdev_name": "Malloc0" 00:05:19.251 }, 00:05:19.251 { 00:05:19.251 "nbd_device": "/dev/nbd1", 00:05:19.251 "bdev_name": "Malloc1" 00:05:19.251 } 00:05:19.251 ]' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:19.251 /dev/nbd1' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:19.251 /dev/nbd1' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:19.251 256+0 records in 00:05:19.251 256+0 records out 00:05:19.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0041904 s, 250 MB/s 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:19.251 256+0 records in 00:05:19.251 256+0 records out 00:05:19.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160677 s, 65.3 MB/s 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:19.251 256+0 records in 00:05:19.251 256+0 records out 00:05:19.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0174962 s, 59.9 MB/s 00:05:19.251 16:51:53 event.app_repeat -- 
bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:19.251 16:51:53 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.252 16:51:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:19.516 16:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:19.800 16:51:53 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.800 16:51:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.800 16:51:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.800 16:51:54 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:20.060 16:51:54 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.631 [2024-12-05 16:51:54.977447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.896 [2024-12-05 16:51:55.051816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.896 [2024-12-05 16:51:55.051823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.896 [2024-12-05 16:51:55.148694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.896 [2024-12-05 16:51:55.148746] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:23.443 spdk_app_start Round 1 00:05:23.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:23.443 16:51:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:23.443 16:51:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:23.443 16:51:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58363 /var/tmp/spdk-nbd.sock 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58363 ']' 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
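Each round's body is the same data-integrity cycle seen in the trace above: export two malloc bdevs over NBD, push 1 MiB of random data through each device, compare it back byte for byte, then tear the exports down and check that the disk count drops to zero. A condensed sketch; the rpc wrapper is an illustrative stand-in, and the waitfornbd/waitfornbd_exit probes are elided:

    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock "$@"; }

    rpc bdev_malloc_create 64 4096          # 64 MiB, 4096-byte blocks -> Malloc0
    rpc bdev_malloc_create 64 4096          # -> Malloc1
    rpc nbd_start_disk Malloc0 /dev/nbd0
    rpc nbd_start_disk Malloc1 /dev/nbd1

    tmp_file=$rootdir/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
        cmp -b -n 1M "$tmp_file" "$nbd"     # byte-compare the first 1 MiB
    done
    rm "$tmp_file"

    rpc nbd_stop_disk /dev/nbd0
    rpc nbd_stop_disk /dev/nbd1
    [[ $(rpc nbd_get_disks | grep -c /dev/nbd) -eq 0 ]]   # nothing left exported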
00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.443 16:51:57 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:23.443 16:51:57 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.704 Malloc0 00:05:23.704 16:51:57 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.704 Malloc1 00:05:23.966 16:51:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.966 /dev/nbd0 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.966 1+0 records in 00:05:23.966 1+0 records out 
00:05:23.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000140538 s, 29.1 MB/s 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.966 16:51:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.966 16:51:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:24.228 /dev/nbd1 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:24.228 1+0 records in 00:05:24.228 1+0 records out 00:05:24.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000143886 s, 28.5 MB/s 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:24.228 16:51:58 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.228 16:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:24.489 { 00:05:24.489 "nbd_device": "/dev/nbd0", 00:05:24.489 "bdev_name": "Malloc0" 00:05:24.489 }, 00:05:24.489 { 00:05:24.489 "nbd_device": "/dev/nbd1", 00:05:24.489 "bdev_name": "Malloc1" 00:05:24.489 } 
00:05:24.489 ]' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:24.489 { 00:05:24.489 "nbd_device": "/dev/nbd0", 00:05:24.489 "bdev_name": "Malloc0" 00:05:24.489 }, 00:05:24.489 { 00:05:24.489 "nbd_device": "/dev/nbd1", 00:05:24.489 "bdev_name": "Malloc1" 00:05:24.489 } 00:05:24.489 ]' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:24.489 /dev/nbd1' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:24.489 /dev/nbd1' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:24.489 256+0 records in 00:05:24.489 256+0 records out 00:05:24.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00740489 s, 142 MB/s 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:24.489 256+0 records in 00:05:24.489 256+0 records out 00:05:24.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013487 s, 77.7 MB/s 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:24.489 256+0 records in 00:05:24.489 256+0 records out 00:05:24.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017725 s, 59.2 MB/s 00:05:24.489 16:51:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.490 16:51:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.751 16:51:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:25.012 16:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.013 16:51:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:25.274 16:51:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:25.274 16:51:59 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:25.534 16:51:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.102 [2024-12-05 16:52:00.343048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.102 [2024-12-05 16:52:00.419172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.102 [2024-12-05 16:52:00.419173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.363 [2024-12-05 16:52:00.517929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.363 [2024-12-05 16:52:00.517982] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:28.900 16:52:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.900 spdk_app_start Round 2 00:05:28.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.900 16:52:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.900 16:52:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58363 /var/tmp/spdk-nbd.sock 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58363 ']' 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
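The waitfornbd probes that recur in every round (the grep, dd, and stat triples in the trace) follow one pattern: poll /proc/partitions until the kernel registers the device, then confirm a direct-I/O read returns a non-empty block. A sketch reconstructed from the traced helper; the test-file path and sleep interval are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size

        # wait for the kernel to list the device in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done

        # confirm a single 4 KiB direct read actually produces data
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }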
00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.900 16:52:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.900 16:52:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.900 Malloc0 00:05:28.900 16:52:03 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:29.159 Malloc1 00:05:29.159 16:52:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.159 16:52:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.417 /dev/nbd0 00:05:29.417 16:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.417 16:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.417 1+0 records in 00:05:29.417 1+0 records out 
00:05:29.417 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394671 s, 10.4 MB/s 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.417 16:52:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.417 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.417 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.417 16:52:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.676 /dev/nbd1 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.676 1+0 records in 00:05:29.676 1+0 records out 00:05:29.676 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264115 s, 15.5 MB/s 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.676 16:52:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.676 16:52:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.943 16:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.944 { 00:05:29.944 "nbd_device": "/dev/nbd0", 00:05:29.944 "bdev_name": "Malloc0" 00:05:29.944 }, 00:05:29.944 { 00:05:29.944 "nbd_device": "/dev/nbd1", 00:05:29.944 "bdev_name": "Malloc1" 00:05:29.944 } 
00:05:29.944 ]' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.944 { 00:05:29.944 "nbd_device": "/dev/nbd0", 00:05:29.944 "bdev_name": "Malloc0" 00:05:29.944 }, 00:05:29.944 { 00:05:29.944 "nbd_device": "/dev/nbd1", 00:05:29.944 "bdev_name": "Malloc1" 00:05:29.944 } 00:05:29.944 ]' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.944 /dev/nbd1' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.944 /dev/nbd1' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.944 16:52:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.944 256+0 records in 00:05:29.944 256+0 records out 00:05:29.944 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0069047 s, 152 MB/s 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.945 256+0 records in 00:05:29.945 256+0 records out 00:05:29.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148721 s, 70.5 MB/s 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.945 256+0 records in 00:05:29.945 256+0 records out 00:05:29.945 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163173 s, 64.3 MB/s 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.945 16:52:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.946 16:52:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.207 16:52:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:30.208 16:52:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.466 16:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.724 16:52:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.724 16:52:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.982 16:52:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.552 [2024-12-05 16:52:05.733785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.552 [2024-12-05 16:52:05.812061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.552 [2024-12-05 16:52:05.812174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.552 [2024-12-05 16:52:05.908957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.552 [2024-12-05 16:52:05.909029] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:34.090 16:52:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58363 /var/tmp/spdk-nbd.sock 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58363 ']' 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:34.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
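Stepping back, event.sh drives the whole app_repeat test as a restart loop: each round ends with spdk_kill_instance SIGTERM, the app logs the shutdown and begins its next iteration on the same socket, and after the last round the harness waits once more and kills the pid for good. A skeleton of that control flow as it appears in the trace (round bodies elided):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"
        # ... malloc/NBD create, write, verify, teardown for this round ...
        "$rootdir/scripts/rpc.py" -s "$rpc_server" spdk_kill_instance SIGTERM
        sleep 3
    done

    # Round 3 is the app's final self-restart; this time stop the process itself
    waitforlisten "$repeat_pid" "$rpc_server"
    killprocess "$repeat_pid"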
00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:34.090 16:52:08 event.app_repeat -- event/event.sh@39 -- # killprocess 58363 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58363 ']' 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58363 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58363 00:05:34.090 killing process with pid 58363 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58363' 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58363 00:05:34.090 16:52:08 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58363 00:05:34.656 spdk_app_start is called in Round 0. 00:05:34.656 Shutdown signal received, stop current app iteration 00:05:34.656 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:34.656 spdk_app_start is called in Round 1. 00:05:34.656 Shutdown signal received, stop current app iteration 00:05:34.656 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:34.656 spdk_app_start is called in Round 2. 00:05:34.656 Shutdown signal received, stop current app iteration 00:05:34.656 Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 reinitialization... 00:05:34.656 spdk_app_start is called in Round 3. 00:05:34.656 Shutdown signal received, stop current app iteration 00:05:34.656 16:52:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.656 16:52:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:34.656 00:05:34.656 real 0m17.568s 00:05:34.656 user 0m38.483s 00:05:34.656 sys 0m2.060s 00:05:34.656 ************************************ 00:05:34.656 END TEST app_repeat 00:05:34.656 16:52:08 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.656 16:52:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 16:52:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:34.656 16:52:08 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.656 16:52:08 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.656 16:52:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.656 16:52:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.656 ************************************ 00:05:34.656 START TEST cpu_locks 00:05:34.656 ************************************ 00:05:34.656 16:52:08 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:34.915 * Looking for test storage... 
00:05:34.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:34.915 16:52:09 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.915 --rc genhtml_branch_coverage=1
00:05:34.915 --rc genhtml_function_coverage=1
00:05:34.915 --rc genhtml_legend=1
00:05:34.915 --rc geninfo_all_blocks=1
00:05:34.915 --rc geninfo_unexecuted_blocks=1
00:05:34.915
00:05:34.915 '
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.915 --rc genhtml_branch_coverage=1
00:05:34.915 --rc genhtml_function_coverage=1
00:05:34.915 --rc genhtml_legend=1
00:05:34.915 --rc geninfo_all_blocks=1
00:05:34.915 --rc geninfo_unexecuted_blocks=1
00:05:34.915
00:05:34.915 '
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.915 --rc genhtml_branch_coverage=1
00:05:34.915 --rc genhtml_function_coverage=1
00:05:34.915 --rc genhtml_legend=1
00:05:34.915 --rc geninfo_all_blocks=1
00:05:34.915 --rc geninfo_unexecuted_blocks=1
00:05:34.915
00:05:34.915 '
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:34.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:34.915 --rc genhtml_branch_coverage=1
00:05:34.915 --rc genhtml_function_coverage=1
00:05:34.915 --rc genhtml_legend=1
00:05:34.915 --rc geninfo_all_blocks=1
00:05:34.915 --rc geninfo_unexecuted_blocks=1
00:05:34.915
00:05:34.915 '
00:05:34.915 16:52:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:34.915 16:52:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:34.915 16:52:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:34.915 16:52:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.915 16:52:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.915 ************************************
00:05:34.915 START TEST default_locks
00:05:34.915 ************************************
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58794
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58794
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58794 ']'
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:34.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.915 16:52:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:34.915 [2024-12-05 16:52:09.179585] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
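The cmp_versions trace at scripts/common.sh@333-368 above walks two dotted version strings field by field to decide whether the installed lcov predates 2.0. A minimal standalone sketch of that field-by-field comparison; the function body below is our simplification, not the exact scripts/common.sh implementation:

# lt: succeed (return 0) when dotted version $1 is older than $2
lt() {
  local IFS=.-:                 # split fields on '.', '-' and ':' as the trace does
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1                      # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2: use the legacy --rc option spelling"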
00:05:34.916 [2024-12-05 16:52:09.179706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58794 ]
00:05:35.174 [2024-12-05 16:52:09.334589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.174 [2024-12-05 16:52:09.414707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.739 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.739 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:35.739 16:52:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58794
00:05:35.739 16:52:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58794
00:05:35.739 16:52:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58794
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58794 ']'
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58794
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58794
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:35.997 killing process with pid 58794
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58794'
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58794
00:05:35.997 16:52:10 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58794
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58794
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58794
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58794
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58794 ']'
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:37.400 ERROR: process (pid: 58794) is no longer running
00:05:37.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58794) - No such process
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:37.400
00:05:37.400 real 0m2.330s
00:05:37.400 user 0m2.342s
00:05:37.400 sys 0m0.442s
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.400 ************************************
00:05:37.400 16:52:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:37.400 END TEST default_locks
00:05:37.400 ************************************
00:05:37.400 16:52:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:37.400 16:52:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:37.400 16:52:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.400 16:52:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:37.400 ************************************
00:05:37.400 START TEST default_locks_via_rpc
00:05:37.400 ************************************
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58847
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58847
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58847 ']'
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
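waitforlisten, exercised throughout these tests, blocks until the given pid is alive and its RPC socket answers; max_retries=100 comes straight from the trace above. A hedged sketch of that polling loop; probing with rpc_get_methods is our assumption here, not necessarily what autotest_common.sh calls internally:

# poll until $pid listens on $sock, or give up after max_retries rounds
waitforlisten_sketch() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} max_retries=100 i
  echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
  for (( i = 0; i < max_retries; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods \
      >/dev/null 2>&1 && return 0            # socket is up and answering
    sleep 0.1
  done
  return 1
}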
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:37.401 16:52:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:37.401 [2024-12-05 16:52:11.547730] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:37.401 [2024-12-05 16:52:11.547825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58847 ]
00:05:37.401 [2024-12-05 16:52:11.695490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.660 [2024-12-05 16:52:11.775306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58847
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58847
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58847
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58847 ']'
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58847
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:38.231 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58847
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:38.492 killing process with pid 58847
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58847'
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58847
00:05:38.492 16:52:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58847
00:05:39.874
00:05:39.874 real 0m2.314s
00:05:39.874 user 0m2.297s
00:05:39.874 sys 0m0.434s
00:05:39.874 16:52:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.874 16:52:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.874 ************************************
00:05:39.874 END TEST default_locks_via_rpc
00:05:39.874 ************************************
00:05:39.874 16:52:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:39.874 16:52:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:39.874 16:52:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.874 16:52:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:39.874 ************************************
00:05:39.874 START TEST non_locking_app_on_locked_coremask
00:05:39.874 ************************************
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58904
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58904 /var/tmp/spdk.sock
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58904 ']'
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:39.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
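default_locks_via_rpc, which just finished above, differs from default_locks only in when the core locks are taken: the target boots normally and the test then drives the two RPCs seen at cpu_locks.sh@65 and @69. The same toggle can be reproduced by hand against any running target; the socket path below is an assumption matching this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core lock files
$rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # re-claim them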
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:39.874 16:52:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:05:39.874 [2024-12-05 16:52:13.917048] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:39.874 [2024-12-05 16:52:13.917132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58904 ]
00:05:39.874 [2024-12-05 16:52:14.066503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.874 [2024-12-05 16:52:14.145500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58915
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58915 /var/tmp/spdk2.sock
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58915 ']'
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.469 16:52:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:40.469 [2024-12-05 16:52:14.818097] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:40.469 [2024-12-05 16:52:14.818190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58915 ]
00:05:40.728 [2024-12-05 16:52:14.974476] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:40.728 [2024-12-05 16:52:14.974512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.988 [2024-12-05 16:52:15.128918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.929 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.930 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:41.930 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58904
00:05:41.930 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58904
00:05:41.930 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58904
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58904 ']'
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58904
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:42.190 16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58904
killing process with pid 58904
16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58904'
16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58904
16:52:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58904
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58915
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58915 ']'
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58915
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58915
killing process with pid 58915
16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58915'
16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58915
00:05:44.722 16:52:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58915
00:05:45.659
00:05:45.659 real 0m6.106s
00:05:45.659 user 0m6.321s
00:05:45.659 sys 0m0.791s
00:05:45.659 ************************************
00:05:45.659 END TEST non_locking_app_on_locked_coremask
00:05:45.659 ************************************
00:05:45.659 16:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:45.659 16:52:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:45.659 16:52:20 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:45.659 16:52:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:45.659 16:52:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:45.659 16:52:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:45.659 ************************************
00:05:45.659 START TEST locking_app_on_unlocked_coremask
00:05:45.659 ************************************
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59016
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59016 /var/tmp/spdk.sock
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59016 ']'
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:45.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:45.659 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:45.918 [2024-12-05 16:52:20.085640] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:45.918 [2024-12-05 16:52:20.085945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59016 ]
00:05:45.918 [2024-12-05 16:52:20.242456] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
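locks_exist (cpu_locks.sh@22 above) is the pass/fail probe used by every one of these tests: it asks lslocks which files the target pid holds locks on and greps for the spdk_cpu_lock prefix. A standalone version of the same check, straight from the trace:

# return 0 when $1 still holds a lock on some /var/tmp/spdk_cpu_lock_* file
locks_exist() {
  lslocks -p "$1" | grep -q spdk_cpu_lock
}
locks_exist "$spdk_tgt_pid" && echo "core lock held" || echo "no core lock held"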
00:05:45.918 [2024-12-05 16:52:20.242583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:46.178 [2024-12-05 16:52:20.318817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.748 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:46.748 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:46.748 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:46.748 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59022
00:05:46.748 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59022 /var/tmp/spdk2.sock
00:05:46.749 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59022 ']'
00:05:46.749 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:46.749 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.749 16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
16:52:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.749 [2024-12-05 16:52:20.981032] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:46.749 [2024-12-05 16:52:20.981312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59022 ]
00:05:47.009 [2024-12-05 16:52:21.144533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.009 [2024-12-05 16:52:21.298147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.948 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:47.948 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:47.948 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59022
00:05:47.948 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:47.948 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59022
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59016
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59016 ']'
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59016
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:48.209 16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59016
killing process with pid 59016
16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59016'
16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59016
16:52:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59016
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59022
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59022 ']'
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59022
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:50.777 16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59022
killing process with pid 59022
16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59022'
16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59022
16:52:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59022
00:05:52.156
00:05:52.156 real 0m6.095s
00:05:52.156 user 0m6.336s
00:05:52.156 sys 0m0.814s
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:52.156 ************************************
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:52.156 END TEST locking_app_on_unlocked_coremask
00:05:52.156 ************************************
00:05:52.156 16:52:26 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:52.156 16:52:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:52.156 16:52:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:52.156 16:52:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:52.156 ************************************
00:05:52.156 START TEST locking_app_on_locked_coremask
00:05:52.156 ************************************
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59119
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59119 /var/tmp/spdk.sock
00:05:52.156 16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59119 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
16:52:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:52.156 [2024-12-05 16:52:26.228022] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:52.156 [2024-12-05 16:52:26.228118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59119 ]
00:05:52.156 [2024-12-05 16:52:26.378613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:52.156 [2024-12-05 16:52:26.460315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59129
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59129 /var/tmp/spdk2.sock
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59129 /var/tmp/spdk2.sock
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59129 /var/tmp/spdk2.sock
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59129 ']'
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:52.727 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:52.987 [2024-12-05 16:52:27.144579] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:52.987 [2024-12-05 16:52:27.144899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59129 ]
00:05:52.987 [2024-12-05 16:52:27.313454] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59119 has claimed it.
00:05:52.987 [2024-12-05 16:52:27.313509] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:53.558 ERROR: process (pid: 59129) is no longer running
00:05:53.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59129) - No such process
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59119
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59119
00:05:53.558 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59119
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59119 ']'
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59119
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:53.819 16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59119
00:05:53.819 killing process with pid 59119
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59119'
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59119
16:52:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59119
00:05:55.206
00:05:55.206 real 0m3.032s
00:05:55.206 user 0m3.276s
00:05:55.206 sys 0m0.514s
00:05:55.206 16:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.206 16:52:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.206 ************************************
00:05:55.206 END TEST locking_app_on_locked_coremask
00:05:55.206 ************************************
00:05:55.206 16:52:29 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:55.206 16:52:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:55.206 16:52:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:55.206 16:52:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:55.206 ************************************
00:05:55.206 START TEST locking_overlapped_coremask
00:05:55.206 ************************************
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:55.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
16:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59188
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59188 /var/tmp/spdk.sock
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59188 ']'
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:55.206 16:52:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:55.206 [2024-12-05 16:52:29.321381] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:55.206 [2024-12-05 16:52:29.321518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59188 ]
00:05:55.206 [2024-12-05 16:52:29.479239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:55.466 [2024-12-05 16:52:29.583917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:55.466 [2024-12-05 16:52:29.584414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:55.466 [2024-12-05 16:52:29.584609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59206
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59206 /var/tmp/spdk2.sock
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59206 /var/tmp/spdk2.sock
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59206 /var/tmp/spdk2.sock
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59206 ']'
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:56.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:56.041 16:52:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:56.041 [2024-12-05 16:52:30.363079] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:56.041 [2024-12-05 16:52:30.363468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59206 ]
00:05:56.299 [2024-12-05 16:52:30.543888] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59188 has claimed it.
00:05:56.299 [2024-12-05 16:52:30.547982] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:56.864 ERROR: process (pid: 59206) is no longer running
00:05:56.864 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59206) - No such process
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59188
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59188 ']'
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59188
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59188
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:56.864 16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59188'
killing process with pid 59188
16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59188
16:52:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59188
00:05:58.259
00:05:58.259 real 0m2.977s
00:05:58.259 user 0m8.090s
00:05:58.259 sys 0m0.528s
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:58.259 ************************************
00:05:58.259 END TEST locking_overlapped_coremask
00:05:58.259 ************************************
00:05:58.259 16:52:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:58.259 16:52:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:58.259 16:52:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.259 16:52:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:58.259 ************************************
00:05:58.259 START TEST locking_overlapped_coremask_via_rpc
00:05:58.259 ************************************
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59259
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59259 /var/tmp/spdk.sock
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59259 ']'
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:58.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:58.259 16:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:58.259 [2024-12-05 16:52:32.371882] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:05:58.259 [2024-12-05 16:52:32.372017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ]
00:05:58.259 [2024-12-05 16:52:32.533559] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
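The locking_overlapped_coremask failure that just ended is pure mask arithmetic: the first target claimed -m 0x7 (cores 0-2), the second asked for -m 0x1c (cores 2-4), and the claim fails on exactly the core the two masks share:

printf 'overlap of 0x7 and 0x1c: %#x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2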
00:05:58.259 [2024-12-05 16:52:32.533605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.519 [2024-12-05 16:52:32.639942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.519 [2024-12-05 16:52:32.640140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.519 [2024-12-05 16:52:32.640222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59271 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59271 /var/tmp/spdk2.sock 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59271 ']' 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.089 16:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.089 [2024-12-05 16:52:33.298240] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:05:59.089 [2024-12-05 16:52:33.298644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59271 ] 00:05:59.350 [2024-12-05 16:52:33.465287] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.350 [2024-12-05 16:52:33.465323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.350 [2024-12-05 16:52:33.625244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.350 [2024-12-05 16:52:33.629044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.350 [2024-12-05 16:52:33.629074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 [2024-12-05 16:52:34.825104] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59259 has claimed it. 
00:06:00.739 request: 00:06:00.739 { 00:06:00.739 "method": "framework_enable_cpumask_locks", 00:06:00.739 "req_id": 1 00:06:00.739 } 00:06:00.739 Got JSON-RPC error response 00:06:00.739 response: 00:06:00.739 { 00:06:00.739 "code": -32603, 00:06:00.739 "message": "Failed to claim CPU core: 2" 00:06:00.739 } 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59259 /var/tmp/spdk.sock 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59259 ']' 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59271 /var/tmp/spdk2.sock 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59271 ']' 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
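The exchange above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks, the first (pid 59259, mask 0x7, cores 0-2) claims its cores over RPC, and the second (pid 59271, mask 0x1c, cores 2-4) then fails on the shared core 2 with the -32603 response shown. A minimal sketch of that sequence, using only the binaries and sockets named in the trace (the waitforlisten polling between steps is omitted):

  # Both targets defer lock claiming at startup; the masks overlap on core 2.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
  # First target claims cores 0-2; /var/tmp/spdk_cpu_lock_000..002 appear.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
  # Second target then tries to claim cores 2-4; core 2 is already locked,
  # so this call is expected to fail with "Failed to claim CPU core: 2".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks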
00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.739 16:52:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.000 00:06:01.000 real 0m2.903s 00:06:01.000 user 0m1.016s 00:06:01.000 sys 0m0.130s 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.000 16:52:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.000 ************************************ 00:06:01.000 END TEST locking_overlapped_coremask_via_rpc 00:06:01.000 ************************************ 00:06:01.000 16:52:35 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.000 16:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59259 ]] 00:06:01.000 16:52:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59259 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59259 ']' 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59259 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59259 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59259' 00:06:01.000 killing process with pid 59259 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59259 00:06:01.000 16:52:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59259 00:06:02.380 16:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59271 ]] 00:06:02.380 16:52:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59271 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59271 ']' 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59271 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.380 
16:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59271 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59271' 00:06:02.380 killing process with pid 59271 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59271 00:06:02.380 16:52:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59271 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59259 ]] 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59259 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59259 ']' 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59259 00:06:03.756 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59259) - No such process 00:06:03.756 Process with pid 59259 is not found 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59259 is not found' 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59271 ]] 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59271 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59271 ']' 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59271 00:06:03.756 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59271) - No such process 00:06:03.756 Process with pid 59271 is not found 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59271 is not found' 00:06:03.756 16:52:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:03.756 00:06:03.756 real 0m28.879s 00:06:03.756 user 0m50.287s 00:06:03.756 sys 0m4.496s 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.756 ************************************ 00:06:03.756 END TEST cpu_locks 00:06:03.756 ************************************ 00:06:03.756 16:52:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.756 00:06:03.756 real 0m53.930s 00:06:03.756 user 1m41.134s 00:06:03.756 sys 0m7.342s 00:06:03.756 16:52:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.756 16:52:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.756 ************************************ 00:06:03.756 END TEST event 00:06:03.756 ************************************ 00:06:03.756 16:52:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.756 16:52:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.756 16:52:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.756 16:52:37 -- common/autotest_common.sh@10 -- # set +x 00:06:03.756 ************************************ 00:06:03.756 START TEST thread 00:06:03.756 ************************************ 00:06:03.756 16:52:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:03.756 * Looking for test storage... 
00:06:03.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:03.757 16:52:37 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:03.757 16:52:37 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:03.757 16:52:37 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:03.757 16:52:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.757 16:52:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.757 16:52:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.757 16:52:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.757 16:52:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.757 16:52:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.757 16:52:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.757 16:52:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.757 16:52:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.757 16:52:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.757 16:52:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.757 16:52:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:03.757 16:52:38 thread -- scripts/common.sh@345 -- # : 1 00:06:03.757 16:52:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.757 16:52:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.757 16:52:38 thread -- scripts/common.sh@365 -- # decimal 1 00:06:03.757 16:52:38 thread -- scripts/common.sh@353 -- # local d=1 00:06:03.757 16:52:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.757 16:52:38 thread -- scripts/common.sh@355 -- # echo 1 00:06:03.757 16:52:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.757 16:52:38 thread -- scripts/common.sh@366 -- # decimal 2 00:06:03.757 16:52:38 thread -- scripts/common.sh@353 -- # local d=2 00:06:03.757 16:52:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.757 16:52:38 thread -- scripts/common.sh@355 -- # echo 2 00:06:03.757 16:52:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.757 16:52:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.757 16:52:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.757 16:52:38 thread -- scripts/common.sh@368 -- # return 0 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:03.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.757 --rc genhtml_branch_coverage=1 00:06:03.757 --rc genhtml_function_coverage=1 00:06:03.757 --rc genhtml_legend=1 00:06:03.757 --rc geninfo_all_blocks=1 00:06:03.757 --rc geninfo_unexecuted_blocks=1 00:06:03.757 00:06:03.757 ' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:03.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.757 --rc genhtml_branch_coverage=1 00:06:03.757 --rc genhtml_function_coverage=1 00:06:03.757 --rc genhtml_legend=1 00:06:03.757 --rc geninfo_all_blocks=1 00:06:03.757 --rc geninfo_unexecuted_blocks=1 00:06:03.757 00:06:03.757 ' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:03.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:03.757 --rc genhtml_branch_coverage=1 00:06:03.757 --rc genhtml_function_coverage=1 00:06:03.757 --rc genhtml_legend=1 00:06:03.757 --rc geninfo_all_blocks=1 00:06:03.757 --rc geninfo_unexecuted_blocks=1 00:06:03.757 00:06:03.757 ' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:03.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.757 --rc genhtml_branch_coverage=1 00:06:03.757 --rc genhtml_function_coverage=1 00:06:03.757 --rc genhtml_legend=1 00:06:03.757 --rc geninfo_all_blocks=1 00:06:03.757 --rc geninfo_unexecuted_blocks=1 00:06:03.757 00:06:03.757 ' 00:06:03.757 16:52:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.757 16:52:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.757 ************************************ 00:06:03.757 START TEST thread_poller_perf 00:06:03.757 ************************************ 00:06:03.757 16:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:03.757 [2024-12-05 16:52:38.100635] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:03.757 [2024-12-05 16:52:38.100722] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59431 ] 00:06:04.015 [2024-12-05 16:52:38.250605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.015 [2024-12-05 16:52:38.329424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.016 Running 1000 pollers for 1 seconds with 1 microseconds period. 
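Reading the announcement line above against the command in the trace, poller_perf's flags appear to map one-to-one onto the run parameters. A hedged decoding, inferred from the log rather than from the tool's help text:

  # -b 1000 -> pollers to register           ("Running 1000 pollers")
  # -l 1    -> poller period in microseconds  (the second run below passes -l 0)
  # -t 1    -> run time in seconds            ("for 1 seconds")
  /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1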
00:06:05.390 [2024-12-05T16:52:39.757Z] ====================================== 00:06:05.390 [2024-12-05T16:52:39.757Z] busy:2611179734 (cyc) 00:06:05.390 [2024-12-05T16:52:39.757Z] total_run_count: 394000 00:06:05.390 [2024-12-05T16:52:39.757Z] tsc_hz: 2600000000 (cyc) 00:06:05.390 [2024-12-05T16:52:39.757Z] ====================================== 00:06:05.390 [2024-12-05T16:52:39.757Z] poller_cost: 6627 (cyc), 2548 (nsec) 00:06:05.390 00:06:05.390 real 0m1.385s 00:06:05.390 user 0m1.221s 00:06:05.390 sys 0m0.057s 00:06:05.390 16:52:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.390 16:52:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.390 ************************************ 00:06:05.390 END TEST thread_poller_perf 00:06:05.390 ************************************ 00:06:05.390 16:52:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.390 16:52:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:05.390 16:52:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.390 16:52:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:05.390 ************************************ 00:06:05.390 START TEST thread_poller_perf 00:06:05.390 ************************************ 00:06:05.390 16:52:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.390 [2024-12-05 16:52:39.530682] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:05.390 [2024-12-05 16:52:39.531073] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:06:05.390 [2024-12-05 16:52:39.686533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.647 Running 1000 pollers for 1 seconds with 0 microseconds period. 
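The poller_cost line in the run-1 results above is plain division: busy TSC cycles over total_run_count gives cycles per poll, and scaling by tsc_hz converts that to nanoseconds; the same arithmetic applies to the zero-period run whose results follow. A quick shell check against the reported figures:

  echo $(( 2611179734 / 394000 ))             # 6627, matching poller_cost in cycles
  echo $(( 6627 * 1000000000 / 2600000000 ))  # 2548, matching poller_cost in nsec at tsc_hz=2600000000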
00:06:05.647 [2024-12-05 16:52:39.764957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.585 [2024-12-05T16:52:40.952Z] ====================================== 00:06:06.585 [2024-12-05T16:52:40.952Z] busy:2602693570 (cyc) 00:06:06.585 [2024-12-05T16:52:40.952Z] total_run_count: 5217000 00:06:06.585 [2024-12-05T16:52:40.952Z] tsc_hz: 2600000000 (cyc) 00:06:06.585 [2024-12-05T16:52:40.952Z] ====================================== 00:06:06.585 [2024-12-05T16:52:40.952Z] poller_cost: 498 (cyc), 191 (nsec) 00:06:06.585 00:06:06.585 real 0m1.387s 00:06:06.585 user 0m1.210s 00:06:06.585 sys 0m0.070s 00:06:06.585 16:52:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.585 16:52:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:06.585 ************************************ 00:06:06.585 END TEST thread_poller_perf 00:06:06.585 ************************************ 00:06:06.585 16:52:40 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:06.585 00:06:06.585 real 0m3.004s 00:06:06.585 user 0m2.545s 00:06:06.585 sys 0m0.242s 00:06:06.585 16:52:40 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.585 16:52:40 thread -- common/autotest_common.sh@10 -- # set +x 00:06:06.585 ************************************ 00:06:06.585 END TEST thread 00:06:06.585 ************************************ 00:06:06.585 16:52:40 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:06.585 16:52:40 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.585 16:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.585 16:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.585 16:52:40 -- common/autotest_common.sh@10 -- # set +x 00:06:06.844 ************************************ 00:06:06.844 START TEST app_cmdline 00:06:06.844 ************************************ 00:06:06.844 16:52:40 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:06.844 * Looking for test storage... 
00:06:06.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:06.844 16:52:41 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.844 16:52:41 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.844 16:52:41 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.844 16:52:41 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.844 16:52:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.844 16:52:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.844 16:52:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.845 16:52:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.845 --rc genhtml_branch_coverage=1 00:06:06.845 --rc genhtml_function_coverage=1 00:06:06.845 --rc genhtml_legend=1 00:06:06.845 --rc geninfo_all_blocks=1 00:06:06.845 --rc geninfo_unexecuted_blocks=1 00:06:06.845 00:06:06.845 ' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.845 --rc genhtml_branch_coverage=1 00:06:06.845 --rc genhtml_function_coverage=1 00:06:06.845 --rc genhtml_legend=1 00:06:06.845 --rc geninfo_all_blocks=1 00:06:06.845 --rc geninfo_unexecuted_blocks=1 00:06:06.845 
00:06:06.845 ' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.845 --rc genhtml_branch_coverage=1 00:06:06.845 --rc genhtml_function_coverage=1 00:06:06.845 --rc genhtml_legend=1 00:06:06.845 --rc geninfo_all_blocks=1 00:06:06.845 --rc geninfo_unexecuted_blocks=1 00:06:06.845 00:06:06.845 ' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.845 --rc genhtml_branch_coverage=1 00:06:06.845 --rc genhtml_function_coverage=1 00:06:06.845 --rc genhtml_legend=1 00:06:06.845 --rc geninfo_all_blocks=1 00:06:06.845 --rc geninfo_unexecuted_blocks=1 00:06:06.845 00:06:06.845 ' 00:06:06.845 16:52:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:06.845 16:52:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59551 00:06:06.845 16:52:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59551 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59551 ']' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.845 16:52:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:06.845 16:52:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:06.845 [2024-12-05 16:52:41.156971] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
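Note the allow-list on the command line above: with --rpcs-allowed spdk_get_version,rpc_get_methods the target serves only those two methods, and anything else should come back as JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats probe further down demonstrates. The behaviour in isolation, with the paths used by this job (waitforlisten again omitted):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # Allowed: returns the version object printed later in the trace.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
  # Not on the list: expected to fail with -32601 "Method not found".
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats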
00:06:06.845 [2024-12-05 16:52:41.157095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59551 ] 00:06:07.103 [2024-12-05 16:52:41.313679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.103 [2024-12-05 16:52:41.388931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.675 16:52:41 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.675 16:52:41 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:07.675 16:52:41 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:07.938 { 00:06:07.938 "version": "SPDK v25.01-pre git sha1 8d3947977", 00:06:07.938 "fields": { 00:06:07.938 "major": 25, 00:06:07.938 "minor": 1, 00:06:07.938 "patch": 0, 00:06:07.938 "suffix": "-pre", 00:06:07.938 "commit": "8d3947977" 00:06:07.938 } 00:06:07.938 } 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:07.938 16:52:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:07.938 16:52:42 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:08.199 request: 00:06:08.199 { 00:06:08.199 "method": "env_dpdk_get_mem_stats", 00:06:08.199 "req_id": 1 00:06:08.199 } 00:06:08.199 Got JSON-RPC error response 00:06:08.199 response: 00:06:08.199 { 00:06:08.199 "code": -32601, 00:06:08.199 "message": "Method not found" 00:06:08.199 } 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.199 16:52:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59551 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59551 ']' 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59551 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59551 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.199 killing process with pid 59551 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59551' 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@973 -- # kill 59551 00:06:08.199 16:52:42 app_cmdline -- common/autotest_common.sh@978 -- # wait 59551 00:06:09.575 00:06:09.575 real 0m2.670s 00:06:09.575 user 0m2.999s 00:06:09.575 sys 0m0.398s 00:06:09.575 16:52:43 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.575 16:52:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:09.575 ************************************ 00:06:09.575 END TEST app_cmdline 00:06:09.575 ************************************ 00:06:09.575 16:52:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:09.575 16:52:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.575 16:52:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.575 16:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:09.575 ************************************ 00:06:09.575 START TEST version 00:06:09.575 ************************************ 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:09.575 * Looking for test storage... 
00:06:09.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.575 16:52:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.575 16:52:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.575 16:52:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.575 16:52:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.575 16:52:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.575 16:52:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.575 16:52:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.575 16:52:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.575 16:52:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.575 16:52:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.575 16:52:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.575 16:52:43 version -- scripts/common.sh@344 -- # case "$op" in 00:06:09.575 16:52:43 version -- scripts/common.sh@345 -- # : 1 00:06:09.575 16:52:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.575 16:52:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.575 16:52:43 version -- scripts/common.sh@365 -- # decimal 1 00:06:09.575 16:52:43 version -- scripts/common.sh@353 -- # local d=1 00:06:09.575 16:52:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.575 16:52:43 version -- scripts/common.sh@355 -- # echo 1 00:06:09.575 16:52:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.575 16:52:43 version -- scripts/common.sh@366 -- # decimal 2 00:06:09.575 16:52:43 version -- scripts/common.sh@353 -- # local d=2 00:06:09.575 16:52:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.575 16:52:43 version -- scripts/common.sh@355 -- # echo 2 00:06:09.575 16:52:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.575 16:52:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.575 16:52:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.575 16:52:43 version -- scripts/common.sh@368 -- # return 0 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.575 --rc genhtml_branch_coverage=1 00:06:09.575 --rc genhtml_function_coverage=1 00:06:09.575 --rc genhtml_legend=1 00:06:09.575 --rc geninfo_all_blocks=1 00:06:09.575 --rc geninfo_unexecuted_blocks=1 00:06:09.575 00:06:09.575 ' 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.575 --rc genhtml_branch_coverage=1 00:06:09.575 --rc genhtml_function_coverage=1 00:06:09.575 --rc genhtml_legend=1 00:06:09.575 --rc geninfo_all_blocks=1 00:06:09.575 --rc geninfo_unexecuted_blocks=1 00:06:09.575 00:06:09.575 ' 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.575 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:09.575 --rc genhtml_branch_coverage=1 00:06:09.575 --rc genhtml_function_coverage=1 00:06:09.575 --rc genhtml_legend=1 00:06:09.575 --rc geninfo_all_blocks=1 00:06:09.575 --rc geninfo_unexecuted_blocks=1 00:06:09.575 00:06:09.575 ' 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.575 --rc genhtml_branch_coverage=1 00:06:09.575 --rc genhtml_function_coverage=1 00:06:09.575 --rc genhtml_legend=1 00:06:09.575 --rc geninfo_all_blocks=1 00:06:09.575 --rc geninfo_unexecuted_blocks=1 00:06:09.575 00:06:09.575 ' 00:06:09.575 16:52:43 version -- app/version.sh@17 -- # get_header_version major 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # cut -f2 00:06:09.575 16:52:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.575 16:52:43 version -- app/version.sh@17 -- # major=25 00:06:09.575 16:52:43 version -- app/version.sh@18 -- # get_header_version minor 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # cut -f2 00:06:09.575 16:52:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.575 16:52:43 version -- app/version.sh@18 -- # minor=1 00:06:09.575 16:52:43 version -- app/version.sh@19 -- # get_header_version patch 00:06:09.575 16:52:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # cut -f2 00:06:09.575 16:52:43 version -- app/version.sh@19 -- # patch=0 00:06:09.575 16:52:43 version -- app/version.sh@20 -- # get_header_version suffix 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # tr -d '"' 00:06:09.575 16:52:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:09.575 16:52:43 version -- app/version.sh@14 -- # cut -f2 00:06:09.575 16:52:43 version -- app/version.sh@20 -- # suffix=-pre 00:06:09.575 16:52:43 version -- app/version.sh@22 -- # version=25.1 00:06:09.575 16:52:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:09.575 16:52:43 version -- app/version.sh@28 -- # version=25.1rc0 00:06:09.575 16:52:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:09.575 16:52:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:09.575 16:52:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:09.575 16:52:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:09.575 ************************************ 00:06:09.575 END TEST version 00:06:09.575 ************************************ 00:06:09.575 00:06:09.575 real 0m0.188s 00:06:09.575 user 0m0.117s 00:06:09.575 sys 0m0.092s 00:06:09.575 16:52:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.575 16:52:43 version -- common/autotest_common.sh@10 -- # set +x 00:06:09.575 16:52:43 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:09.575 16:52:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:09.575 16:52:43 -- spdk/autotest.sh@194 -- # uname -s 00:06:09.575 16:52:43 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:09.575 16:52:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.575 16:52:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:09.575 16:52:43 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:09.575 16:52:43 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:09.575 16:52:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:09.575 16:52:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.575 16:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:09.575 ************************************ 00:06:09.575 START TEST blockdev_nvme 00:06:09.575 ************************************ 00:06:09.575 16:52:43 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:09.833 * Looking for test storage... 00:06:09.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:09.833 16:52:43 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:09.833 16:52:43 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:09.833 16:52:43 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:06:09.833 16:52:44 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:09.833 16:52:44 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:09.833 16:52:44 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.833 16:52:44 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:09.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.833 --rc genhtml_branch_coverage=1 00:06:09.833 --rc genhtml_function_coverage=1 00:06:09.833 --rc genhtml_legend=1 00:06:09.833 --rc geninfo_all_blocks=1 00:06:09.833 --rc geninfo_unexecuted_blocks=1 00:06:09.833 00:06:09.833 ' 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:09.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.834 --rc genhtml_branch_coverage=1 00:06:09.834 --rc genhtml_function_coverage=1 00:06:09.834 --rc genhtml_legend=1 00:06:09.834 --rc geninfo_all_blocks=1 00:06:09.834 --rc geninfo_unexecuted_blocks=1 00:06:09.834 00:06:09.834 ' 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:09.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.834 --rc genhtml_branch_coverage=1 00:06:09.834 --rc genhtml_function_coverage=1 00:06:09.834 --rc genhtml_legend=1 00:06:09.834 --rc geninfo_all_blocks=1 00:06:09.834 --rc geninfo_unexecuted_blocks=1 00:06:09.834 00:06:09.834 ' 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:09.834 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.834 --rc genhtml_branch_coverage=1 00:06:09.834 --rc genhtml_function_coverage=1 00:06:09.834 --rc genhtml_legend=1 00:06:09.834 --rc geninfo_all_blocks=1 00:06:09.834 --rc geninfo_unexecuted_blocks=1 00:06:09.834 00:06:09.834 ' 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:09.834 16:52:44 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59723 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59723 00:06:09.834 16:52:44 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 59723 ']' 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.834 16:52:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:09.834 [2024-12-05 16:52:44.138247] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:06:09.834 [2024-12-05 16:52:44.138885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:06:10.092 [2024-12-05 16:52:44.296274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.092 [2024-12-05 16:52:44.390822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.031 16:52:45 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.031 16:52:45 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:11.031 16:52:45 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:11.031 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.031 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:11.297 16:52:45 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:11.297 16:52:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:11.298 16:52:45 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "816ad04d-2f2c-4214-b1f9-cea4b1cbf812"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "816ad04d-2f2c-4214-b1f9-cea4b1cbf812",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "f74c42ee-81d9-4a59-990c-2e347424c5db"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f74c42ee-81d9-4a59-990c-2e347424c5db",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a6ad19f2-b5f2-4fa0-bfca-e22f8eb3e053"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a6ad19f2-b5f2-4fa0-bfca-e22f8eb3e053",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e49339e7-aa3a-4035-8ed2-c1e9869ad3e5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e49339e7-aa3a-4035-8ed2-c1e9869ad3e5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "f85b0388-30cf-45c3-997a-23205e43bb56"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "f85b0388-30cf-45c3-997a-23205e43bb56",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9fe26af1-308c-4646-a1a1-af86ee4fff44"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9fe26af1-308c-4646-a1a1-af86ee4fff44",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:11.298 16:52:45 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:11.298 16:52:45 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:11.298 16:52:45 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:11.298 16:52:45 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 59723 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 59723 ']' 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 59723 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:11.298 16:52:45 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59723 00:06:11.298 killing process with pid 59723 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59723' 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 59723 00:06:11.298 16:52:45 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 59723 00:06:13.204 16:52:47 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:13.204 16:52:47 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.204 16:52:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:13.204 16:52:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.204 16:52:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:13.204 ************************************ 00:06:13.204 START TEST bdev_hello_world 00:06:13.204 ************************************ 00:06:13.204 16:52:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:13.204 [2024-12-05 16:52:47.306921] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:13.204 [2024-12-05 16:52:47.307053] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:06:13.204 [2024-12-05 16:52:47.466748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.204 [2024-12-05 16:52:47.563342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.770 [2024-12-05 16:52:48.104459] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:13.770 [2024-12-05 16:52:48.104622] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:13.770 [2024-12-05 16:52:48.104647] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:13.771 [2024-12-05 16:52:48.107053] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:13.771 [2024-12-05 16:52:48.107737] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:13.771 [2024-12-05 16:52:48.107791] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:13.771 [2024-12-05 16:52:48.108120] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
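The hello-world stage that just completed reduces to pointing the example binary at a bdev JSON config; the invocation is exactly the one shown in the xtrace above and can be run standalone (paths assume this CI layout):
SPDK=/home/vagrant/spdk_repo/spdk
# writes "Hello World!" through bdev Nvme0n1, reads it back, then stops the app
"$SPDK"/build/examples/hello_bdev --json "$SPDK"/test/bdev/bdev.json -b Nvme0n1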
00:06:13.771 00:06:13.771 [2024-12-05 16:52:48.108144] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:14.704 00:06:14.704 real 0m1.503s 00:06:14.704 user 0m1.230s 00:06:14.704 sys 0m0.166s 00:06:14.704 16:52:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.704 16:52:48 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:14.705 ************************************ 00:06:14.705 END TEST bdev_hello_world 00:06:14.705 ************************************ 00:06:14.705 16:52:48 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:14.705 16:52:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.705 16:52:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.705 16:52:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:14.705 ************************************ 00:06:14.705 START TEST bdev_bounds 00:06:14.705 ************************************ 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59844 00:06:14.705 Process bdevio pid: 59844 00:06:14.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59844' 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59844 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 59844 ']' 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.705 16:52:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:14.705 [2024-12-05 16:52:48.870787] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
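Before the bdevio output starts, note how the harness derived its bdev list earlier in this run: bdev_get_bdevs is filtered to unclaimed bdevs and reduced to names. A condensed sketch that folds the harness's two jq passes into one (equivalent output, not the literal harness code; socket path assumed):
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
mapfile -t bdevs_name < <("$RPC" -s /var/tmp/spdk.sock bdev_get_bdevs \
    | jq -r '.[] | select(.claimed == false) | .name')
printf '%s\n' "${bdevs_name[@]}"   # expect: Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1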
00:06:14.705 [2024-12-05 16:52:48.871066] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59844 ] 00:06:14.705 [2024-12-05 16:52:49.028039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:14.963 [2024-12-05 16:52:49.106552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.963 [2024-12-05 16:52:49.106777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.963 [2024-12-05 16:52:49.106777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.529 16:52:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.529 16:52:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:15.529 16:52:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:15.529 I/O targets: 00:06:15.529 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:15.529 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:15.529 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.529 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.529 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:15.529 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:15.529 00:06:15.529 00:06:15.529 CUnit - A unit testing framework for C - Version 2.1-3 00:06:15.529 http://cunit.sourceforge.net/ 00:06:15.529 00:06:15.529 00:06:15.529 Suite: bdevio tests on: Nvme3n1 00:06:15.529 Test: blockdev write read block ...passed 00:06:15.529 Test: blockdev write zeroes read block ...passed 00:06:15.529 Test: blockdev write zeroes read no split ...passed 00:06:15.529 Test: blockdev write zeroes read split ...passed 00:06:15.529 Test: blockdev write zeroes read split partial ...passed 00:06:15.529 Test: blockdev reset ...[2024-12-05 16:52:49.850818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:15.529 [2024-12-05 16:52:49.855395] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:06:15.529 passed 00:06:15.529 Test: blockdev write read 8 blocks ...passed 00:06:15.529 Test: blockdev write read size > 128k ...passed 00:06:15.529 Test: blockdev write read invalid size ...passed 00:06:15.529 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.529 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.529 Test: blockdev write read max offset ...passed 00:06:15.529 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.529 Test: blockdev writev readv 8 blocks ...passed 00:06:15.529 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.529 Test: blockdev writev readv block ...passed 00:06:15.529 Test: blockdev writev readv size > 128k ...passed 00:06:15.529 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.529 Test: blockdev comparev and writev ...[2024-12-05 16:52:49.869159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b6c0a000 len:0x1000 00:06:15.529 [2024-12-05 16:52:49.869309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.529 passed 00:06:15.529 Test: blockdev nvme passthru rw ...passed 00:06:15.529 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.529 Test: blockdev nvme admin passthru ...[2024-12-05 16:52:49.870389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.529 [2024-12-05 16:52:49.870440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.529 passed 00:06:15.529 Test: blockdev copy ...passed
00:06:15.529 Suite: bdevio tests on: Nvme2n3 00:06:15.529 Test: blockdev write read block ...passed 00:06:15.529 Test: blockdev write zeroes read block ...passed 00:06:15.788 Test: blockdev write zeroes read no split ...passed 00:06:15.788 Test: blockdev write zeroes read split ...passed 00:06:15.788 Test: blockdev write zeroes read split partial ...passed 00:06:15.788 Test: blockdev reset ...[2024-12-05 16:52:49.915427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.788 [2024-12-05 16:52:49.919074] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:15.788 passed 00:06:15.788 Test: blockdev write read 8 blocks ...passed
00:06:15.788 Test: blockdev write read size > 128k ...passed 00:06:15.788 Test: blockdev write read invalid size ...passed 00:06:15.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.788 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.788 Test: blockdev write read max offset ...passed 00:06:15.788 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.788 Test: blockdev writev readv 8 blocks ...passed 00:06:15.788 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.788 Test: blockdev writev readv block ...passed 00:06:15.788 Test: blockdev writev readv size > 128k ...passed 00:06:15.788 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.788 Test: blockdev comparev and writev ...[2024-12-05 16:52:49.926534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x299e06000 len:0x1000 00:06:15.788 [2024-12-05 16:52:49.926578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev nvme passthru rw ...passed 00:06:15.788 Test: blockdev nvme passthru vendor specific ...passed 00:06:15.788 Test: blockdev nvme admin passthru ...[2024-12-05 16:52:49.927422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.788 [2024-12-05 16:52:49.927451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev copy ...passed
00:06:15.788 Suite: bdevio tests on: Nvme2n2 00:06:15.788 Test: blockdev write read block ...passed 00:06:15.788 Test: blockdev write zeroes read block ...passed 00:06:15.788 Test: blockdev write zeroes read no split ...passed 00:06:15.788 Test: blockdev write zeroes read split ...passed 00:06:15.788 Test: blockdev write zeroes read split partial ...passed 00:06:15.788 Test: blockdev reset ...[2024-12-05 16:52:49.987144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.788 [2024-12-05 16:52:49.991685] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:15.788 passed 00:06:15.788 Test: blockdev write read 8 blocks ...passed 00:06:15.788 Test: blockdev write read size > 128k ...passed 00:06:15.788 Test: blockdev write read invalid size ...passed 00:06:15.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.788 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.788 Test: blockdev write read max offset ...passed 00:06:15.788 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.788 Test: blockdev writev readv 8 blocks ...passed 00:06:15.788 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.788 Test: blockdev writev readv block ...passed 00:06:15.788 Test: blockdev writev readv size > 128k ...passed 00:06:15.788 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.788 Test: blockdev comparev and writev ...[2024-12-05 16:52:50.009991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e3c000 len:0x1000 00:06:15.788 [2024-12-05 16:52:50.010031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev nvme passthru rw ...passed 00:06:15.788 Test: blockdev nvme passthru vendor specific ...[2024-12-05 16:52:50.012877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.788 [2024-12-05 16:52:50.013002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev nvme admin passthru ...passed 00:06:15.788 Test: blockdev copy ...passed
00:06:15.788 Suite: bdevio tests on: Nvme2n1 00:06:15.788 Test: blockdev write read block ...passed 00:06:15.788 Test: blockdev write zeroes read block ...passed 00:06:15.788 Test: blockdev write zeroes read no split ...passed 00:06:15.788 Test: blockdev write zeroes read split ...passed 00:06:15.788 Test: blockdev write zeroes read split partial ...passed 00:06:15.788 Test: blockdev reset ...[2024-12-05 16:52:50.070688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:15.788 [2024-12-05 16:52:50.074697] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:06:15.788 passed 00:06:15.788 Test: blockdev write read 8 blocks ...passed
00:06:15.788 Test: blockdev write read size > 128k ...passed 00:06:15.788 Test: blockdev write read invalid size ...passed 00:06:15.788 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:15.788 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:15.788 Test: blockdev write read max offset ...passed 00:06:15.788 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:15.788 Test: blockdev writev readv 8 blocks ...passed 00:06:15.788 Test: blockdev writev readv 30 x 1block ...passed 00:06:15.788 Test: blockdev writev readv block ...passed 00:06:15.788 Test: blockdev writev readv size > 128k ...passed 00:06:15.788 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:15.788 Test: blockdev comparev and writev ...[2024-12-05 16:52:50.094105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e38000 len:0x1000 00:06:15.788 [2024-12-05 16:52:50.094470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev nvme passthru rw ...passed 00:06:15.788 Test: blockdev nvme passthru vendor specific ...[2024-12-05 16:52:50.097696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:15.788 [2024-12-05 16:52:50.098045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:15.788 passed 00:06:15.788 Test: blockdev nvme admin passthru ...passed 00:06:15.788 Test: blockdev copy ...passed
00:06:15.788 Suite: bdevio tests on: Nvme1n1 00:06:15.788 Test: blockdev write read block ...passed 00:06:15.788 Test: blockdev write zeroes read block ...passed 00:06:15.788 Test: blockdev write zeroes read no split ...passed 00:06:15.788 Test: blockdev write zeroes read split ...passed 00:06:15.788 Test: blockdev write zeroes read split partial ...passed 00:06:15.788 Test: blockdev reset ...[2024-12-05 16:52:50.152571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:16.047 [2024-12-05 16:52:50.156198] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:06:16.047 passed 00:06:16.047 Test: blockdev write read 8 blocks ...passed 00:06:16.047 Test: blockdev write read size > 128k ...passed 00:06:16.047 Test: blockdev write read invalid size ...passed 00:06:16.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.047 Test: blockdev write read max offset ...passed 00:06:16.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.047 Test: blockdev writev readv 8 blocks ...passed 00:06:16.047 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.047 Test: blockdev writev readv block ...passed 00:06:16.047 Test: blockdev writev readv size > 128k ...passed 00:06:16.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.047 Test: blockdev comparev and writev ...[2024-12-05 16:52:50.174372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d5e34000 len:0x1000 00:06:16.047 [2024-12-05 16:52:50.174422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:16.047 passed 00:06:16.047 Test: blockdev nvme passthru rw ...passed 00:06:16.047 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.047 Test: blockdev nvme admin passthru ...[2024-12-05 16:52:50.176444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:16.047 [2024-12-05 16:52:50.176483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:16.047 passed 00:06:16.047 Test: blockdev copy ...passed
00:06:16.047 Suite: bdevio tests on: Nvme0n1 00:06:16.047 Test: blockdev write read block ...passed 00:06:16.047 Test: blockdev write zeroes read block ...passed 00:06:16.047 Test: blockdev write zeroes read no split ...passed 00:06:16.047 Test: blockdev write zeroes read split ...passed 00:06:16.047 Test: blockdev write zeroes read split partial ...passed 00:06:16.047 Test: blockdev reset ...[2024-12-05 16:52:50.235602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:16.047 [2024-12-05 16:52:50.239508] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:16.047 passed 00:06:16.047 Test: blockdev write read 8 blocks ...passed 00:06:16.047 Test: blockdev write read size > 128k ...passed 00:06:16.047 Test: blockdev write read invalid size ...passed 00:06:16.047 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:16.047 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:16.047 Test: blockdev write read max offset ...passed 00:06:16.047 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:16.047 Test: blockdev writev readv 8 blocks ...passed 00:06:16.047 Test: blockdev writev readv 30 x 1block ...passed 00:06:16.047 Test: blockdev writev readv block ...passed 00:06:16.047 Test: blockdev writev readv size > 128k ...passed 00:06:16.047 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:16.047 Test: blockdev comparev and writev ...passed 00:06:16.047 Test: blockdev nvme passthru rw ...[2024-12-05 16:52:50.254544] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:16.047 separate metadata which is not supported yet. 00:06:16.047 passed 00:06:16.047 Test: blockdev nvme passthru vendor specific ...passed 00:06:16.047 Test: blockdev nvme admin passthru ...[2024-12-05 16:52:50.256009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:16.047 [2024-12-05 16:52:50.256059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:16.047 passed 00:06:16.047 Test: blockdev copy ...passed 00:06:16.047
00:06:16.047 Run Summary: Type Total Ran Passed Failed Inactive 00:06:16.047 suites 6 6 n/a 0 0 00:06:16.047 tests 138 138 138 0 0 00:06:16.047 asserts 893 893 893 0 n/a 00:06:16.047 00:06:16.047 Elapsed time = 1.172 seconds 00:06:16.047 0 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59844 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 59844 ']' 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 59844 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59844 killing process with pid 59844 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59844' 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 59844 00:06:16.048 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 59844 00:06:16.614 16:52:50 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:16.614 00:06:16.614 real 0m2.162s 00:06:16.614 user 0m5.502s 00:06:16.614 sys 0m0.277s 00:06:16.614 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.614 16:52:50 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:16.614 ************************************ 00:06:16.614 END TEST bdev_bounds 00:06:16.614 ************************************ 00:06:16.872 16:52:51 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.872 16:52:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:16.872 16:52:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.872 16:52:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:16.872 ************************************ 00:06:16.872 START TEST bdev_nbd 00:06:16.872 ************************************ 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59898 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59898 /var/tmp/spdk-nbd.sock 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 59898 ']' 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:16.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.872 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:16.872 [2024-12-05 16:52:51.099157] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
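The nbd stage whose output follows boils down to three steps per device: export a bdev through the kernel nbd driver, prove the device answers direct I/O with a one-block dd (this is what waitfornbd's check below amounts to), then tear it down. A minimal sketch, assuming the nbd kernel module is loaded and bdev_svc is listening on the socket named above:
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$RPC" -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0   # map bdev -> kernel block device
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct        # one 4 KiB O_DIRECT read confirms the export works
"$RPC" -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0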
00:06:16.872 [2024-12-05 16:52:51.099388] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.131 [2024-12-05 16:52:51.254802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.131 [2024-12-05 16:52:51.350232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.696 16:52:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:17.955 1+0 records in 
00:06:17.955 1+0 records out 00:06:17.955 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000987569 s, 4.1 MB/s 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:17.955 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:18.213 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.214 1+0 records in 00:06:18.214 1+0 records out 00:06:18.214 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000874152 s, 4.7 MB/s 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.214 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.472 1+0 records in 00:06:18.472 1+0 records out 00:06:18.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000951233 s, 4.3 MB/s 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.472 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.730 1+0 records in 00:06:18.730 1+0 records out 00:06:18.730 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00099572 s, 4.1 MB/s 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.730 16:52:52 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.730 16:52:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.988 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.989 1+0 records in 00:06:18.989 1+0 records out 00:06:18.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110232 s, 3.7 MB/s 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:18.989 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:18.989 1+0 records in 00:06:18.989 1+0 records out 00:06:18.989 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114201 s, 3.6 MB/s 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd0", 00:06:19.247 "bdev_name": "Nvme0n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd1", 00:06:19.247 "bdev_name": "Nvme1n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd2", 00:06:19.247 "bdev_name": "Nvme2n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd3", 00:06:19.247 "bdev_name": "Nvme2n2" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd4", 00:06:19.247 "bdev_name": "Nvme2n3" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd5", 00:06:19.247 "bdev_name": "Nvme3n1" 00:06:19.247 } 00:06:19.247 ]' 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd0", 00:06:19.247 "bdev_name": "Nvme0n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd1", 00:06:19.247 "bdev_name": "Nvme1n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd2", 00:06:19.247 "bdev_name": "Nvme2n1" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd3", 00:06:19.247 "bdev_name": "Nvme2n2" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd4", 00:06:19.247 "bdev_name": "Nvme2n3" 00:06:19.247 }, 00:06:19.247 { 00:06:19.247 "nbd_device": "/dev/nbd5", 00:06:19.247 "bdev_name": "Nvme3n1" 00:06:19.247 } 00:06:19.247 ]' 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:19.247 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.248 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:19.248 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.248 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:19.248 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.248 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.506 16:52:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.764 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:20.022 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:20.022 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.023 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.281 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.539 16:52:54 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.796 16:52:55 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:20.796 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:21.077 /dev/nbd0 00:06:21.077 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:21.077 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:21.077 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:21.077 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.078 
16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.078 1+0 records in 00:06:21.078 1+0 records out 00:06:21.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094487 s, 4.3 MB/s 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.078 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:21.356 /dev/nbd1 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.356 1+0 records in 00:06:21.356 1+0 records out 00:06:21.356 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000952514 s, 4.3 MB/s 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.356 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:21.614 /dev/nbd10 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.614 1+0 records in 00:06:21.614 1+0 records out 00:06:21.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.001004 s, 4.1 MB/s 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.614 16:52:55 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:21.873 /dev/nbd11 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:21.873 1+0 records in 00:06:21.873 1+0 records out 00:06:21.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000955584 s, 4.3 MB/s 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:21.873 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:22.131 /dev/nbd12 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.131 1+0 records in 00:06:22.131 1+0 records out 00:06:22.131 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00158212 s, 2.6 MB/s 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.131 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:22.131 /dev/nbd13 00:06:22.131 16:52:56 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:22.390 1+0 records in 00:06:22.390 1+0 records out 00:06:22.390 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000918948 s, 4.5 MB/s 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:22.390 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd0", 00:06:22.391 "bdev_name": "Nvme0n1" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd1", 00:06:22.391 "bdev_name": "Nvme1n1" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd10", 00:06:22.391 "bdev_name": "Nvme2n1" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd11", 00:06:22.391 "bdev_name": "Nvme2n2" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd12", 00:06:22.391 "bdev_name": "Nvme2n3" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd13", 00:06:22.391 "bdev_name": "Nvme3n1" 00:06:22.391 } 00:06:22.391 ]' 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd0", 00:06:22.391 "bdev_name": "Nvme0n1" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd1", 00:06:22.391 "bdev_name": "Nvme1n1" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd10", 00:06:22.391 "bdev_name": "Nvme2n1" 00:06:22.391 }, 00:06:22.391 
{ 00:06:22.391 "nbd_device": "/dev/nbd11", 00:06:22.391 "bdev_name": "Nvme2n2" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd12", 00:06:22.391 "bdev_name": "Nvme2n3" 00:06:22.391 }, 00:06:22.391 { 00:06:22.391 "nbd_device": "/dev/nbd13", 00:06:22.391 "bdev_name": "Nvme3n1" 00:06:22.391 } 00:06:22.391 ]' 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:22.391 /dev/nbd1 00:06:22.391 /dev/nbd10 00:06:22.391 /dev/nbd11 00:06:22.391 /dev/nbd12 00:06:22.391 /dev/nbd13' 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:22.391 /dev/nbd1 00:06:22.391 /dev/nbd10 00:06:22.391 /dev/nbd11 00:06:22.391 /dev/nbd12 00:06:22.391 /dev/nbd13' 00:06:22.391 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:22.649 256+0 records in 00:06:22.649 256+0 records out 00:06:22.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00653738 s, 160 MB/s 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:22.649 256+0 records in 00:06:22.649 256+0 records out 00:06:22.649 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.223937 s, 4.7 MB/s 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.649 16:52:56 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:22.906 256+0 records in 00:06:22.906 256+0 records out 00:06:22.906 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.191628 s, 5.5 MB/s 00:06:22.906 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:22.906 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:23.165 256+0 records in 00:06:23.165 256+0 records out 00:06:23.165 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.228694 s, 4.6 MB/s 00:06:23.165 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.165 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:23.423 256+0 records in 00:06:23.423 256+0 records out 00:06:23.423 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185981 s, 5.6 MB/s 00:06:23.423 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.423 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:23.681 256+0 records in 00:06:23.681 256+0 records out 00:06:23.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.194563 s, 5.4 MB/s 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:23.681 256+0 records in 00:06:23.681 256+0 records out 00:06:23.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.162487 s, 6.5 MB/s 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:57 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # 
cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.681 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.940 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.198 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.457 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.715 16:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:24.715 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:24.715 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:24.715 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:24.715 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.715 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.716 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.975 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:25.234 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:25.492 malloc_lvol_verify 00:06:25.492 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:25.752 e3457a28-450f-4cdf-b880-920bd218199b 00:06:25.752 16:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:26.012 f95c192c-0c84-4e53-a67a-d31f81c8ed09 00:06:26.012 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:26.012 /dev/nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:26.272 mke2fs 1.47.0 (5-Feb-2023) 00:06:26.272 Discarding device blocks: 0/4096 done 00:06:26.272 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:26.272 00:06:26.272 Allocating group tables: 0/1 done 00:06:26.272 Writing inode tables: 0/1 done 00:06:26.272 Creating journal (1024 blocks): done 00:06:26.272 Writing superblocks and filesystem accounting information: 0/1 done 00:06:26.272 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:26.272 16:53:00 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59898 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 59898 ']' 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 59898 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.272 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59898 00:06:26.533 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.533 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.533 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59898' 00:06:26.533 killing process with pid 59898 00:06:26.533 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 59898 00:06:26.533 16:53:00 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 59898 00:06:27.105 16:53:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:27.105 00:06:27.105 real 0m10.239s 00:06:27.105 user 0m14.183s 00:06:27.105 sys 0m3.265s 00:06:27.105 16:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.105 16:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:27.105 ************************************ 00:06:27.105 END TEST bdev_nbd 00:06:27.105 ************************************ 00:06:27.105 16:53:01 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:27.105 16:53:01 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:27.105 skipping fio tests on NVMe due to multi-ns failures. 00:06:27.105 16:53:01 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
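The nbd checks traced throughout the bdev_nbd test above all funnel through the same polling helper: wait for the device node to appear in /proc/partitions, then prove it is usable with a single 4 KiB direct read. A rough reconstruction of that helper follows; the retry sleep and the temp-file path are assumptions, since only the 20-iteration bounds and the grep/dd/stat/rm sequence are visible in the xtrace output.

# Sketch of waitfornbd as exercised above (common/autotest_common.sh@872-893).
# The sleep interval and the /tmp path are assumed; the loop bounds, the grep
# on /proc/partitions, and the dd/stat/rm check come from the traced lines.
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do
        # One 4 KiB O_DIRECT read must succeed and yield a non-empty file
        # before the device is considered ready.
        if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [[ $size != 0 ]] && return 0
        fi
        sleep 0.1
    done
    return 1
}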
00:06:27.105 16:53:01 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:27.105 16:53:01 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:27.105 16:53:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:27.105 16:53:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.105 16:53:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.105 ************************************ 00:06:27.105 START TEST bdev_verify 00:06:27.105 ************************************ 00:06:27.105 16:53:01 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:27.105 [2024-12-05 16:53:01.390989] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:27.105 [2024-12-05 16:53:01.391104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60277 ] 00:06:27.365 [2024-12-05 16:53:01.551546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.365 [2024-12-05 16:53:01.669375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.365 [2024-12-05 16:53:01.669470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.305 Running I/O for 5 seconds... 00:06:30.179 17664.00 IOPS, 69.00 MiB/s [2024-12-05T16:53:05.542Z] 19200.00 IOPS, 75.00 MiB/s [2024-12-05T16:53:06.924Z] 20565.33 IOPS, 80.33 MiB/s [2024-12-05T16:53:07.497Z] 20320.00 IOPS, 79.38 MiB/s [2024-12-05T16:53:07.497Z] 20224.00 IOPS, 79.00 MiB/s 00:06:33.130 Latency(us) 00:06:33.130 [2024-12-05T16:53:07.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.130 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0xbd0bd 00:06:33.130 Nvme0n1 : 5.06 1643.25 6.42 0.00 0.00 77571.00 16232.76 96791.63 00:06:33.130 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:33.130 Nvme0n1 : 5.07 1666.15 6.51 0.00 0.00 76549.07 14922.04 101631.21 00:06:33.130 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0xa0000 00:06:33.130 Nvme1n1 : 5.06 1642.75 6.42 0.00 0.00 77466.58 18955.03 89935.56 00:06:33.130 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0xa0000 length 0xa0000 00:06:33.130 Nvme1n1 : 5.07 1665.65 6.51 0.00 0.00 76429.00 16636.06 93968.54 00:06:33.130 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0x80000 00:06:33.130 Nvme2n1 : 5.09 1647.21 6.43 0.00 0.00 76765.95 10233.70 72997.02 00:06:33.130 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x80000 length 0x80000 00:06:33.130 Nvme2n1 : 5.07 1665.19 6.50 0.00 0.00 76009.78 17745.13 69770.63 00:06:33.130 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0x80000 00:06:33.130 Nvme2n2 : 5.10 1655.14 6.47 0.00 0.00 76390.91 10032.05 75013.51 00:06:33.130 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x80000 length 0x80000 00:06:33.130 Nvme2n2 : 5.09 1672.65 6.53 0.00 0.00 75559.38 6452.78 68560.74 00:06:33.130 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0x80000 00:06:33.130 Nvme2n3 : 5.11 1654.70 6.46 0.00 0.00 76255.14 10082.46 77433.30 00:06:33.130 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x80000 length 0x80000 00:06:33.130 Nvme2n3 : 5.09 1672.20 6.53 0.00 0.00 75430.59 6074.68 67754.14 00:06:33.130 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x0 length 0x20000 00:06:33.130 Nvme3n1 : 5.11 1654.26 6.46 0.00 0.00 76122.20 9931.22 68157.44 00:06:33.130 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:33.130 Verification LBA range: start 0x20000 length 0x20000 00:06:33.130 Nvme3n1 : 5.11 1676.89 6.55 0.00 0.00 75170.01 15123.69 72593.72 00:06:33.130 [2024-12-05T16:53:07.497Z] =================================================================================================================== 00:06:33.130 [2024-12-05T16:53:07.497Z] Total : 19916.04 77.80 0.00 0.00 76304.16 6074.68 101631.21 00:06:34.516 00:06:34.516 real 0m7.437s 00:06:34.516 user 0m13.760s 00:06:34.516 sys 0m0.306s 00:06:34.516 16:53:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.516 16:53:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:34.516 ************************************ 00:06:34.516 END TEST bdev_verify 00:06:34.516 ************************************ 00:06:34.516 16:53:08 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:34.516 16:53:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:34.516 16:53:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.516 16:53:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.516 ************************************ 00:06:34.516 START TEST bdev_verify_big_io 00:06:34.516 ************************************ 00:06:34.516 16:53:08 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:34.777 [2024-12-05 16:53:08.924774] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
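Each bdevperf run in this section, including the big-IO pass starting here, is driven by the same --json bdev configuration. A minimal sketch of the shape such a file takes is below; the controller name and PCI address are placeholders, and the real test/bdev/bdev.json is generated by the harness rather than written by hand.

# Illustrative bdev config of the form bdevperf consumes via --json.
# "Nvme0" and the traddr are placeholder values, not the devices used above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "name": "Nvme0", "trtype": "PCIe", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF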
00:06:34.777 [2024-12-05 16:53:08.924968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60375 ] 00:06:34.777 [2024-12-05 16:53:09.093987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.038 [2024-12-05 16:53:09.226070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.038 [2024-12-05 16:53:09.226091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.608 Running I/O for 5 seconds... 00:06:40.267 746.00 IOPS, 46.62 MiB/s [2024-12-05T16:53:16.008Z] 2032.50 IOPS, 127.03 MiB/s [2024-12-05T16:53:16.008Z] 2825.67 IOPS, 176.60 MiB/s 00:06:41.641 Latency(us) 00:06:41.642 [2024-12-05T16:53:16.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:41.642 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0xbd0b 00:06:41.642 Nvme0n1 : 5.63 113.68 7.11 0.00 0.00 1086143.65 16837.71 1161499.57 00:06:41.642 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:41.642 Nvme0n1 : 5.61 114.02 7.13 0.00 0.00 1082615.02 27827.59 1148594.02 00:06:41.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0xa000 00:06:41.642 Nvme1n1 : 5.63 113.65 7.10 0.00 0.00 1048184.04 107277.39 987274.63 00:06:41.642 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0xa000 length 0xa000 00:06:41.642 Nvme1n1 : 5.62 113.98 7.12 0.00 0.00 1046650.88 115343.36 974369.08 00:06:41.642 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0x8000 00:06:41.642 Nvme2n1 : 5.84 120.48 7.53 0.00 0.00 959771.86 58881.58 1006632.96 00:06:41.642 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x8000 length 0x8000 00:06:41.642 Nvme2n1 : 5.85 120.40 7.53 0.00 0.00 960040.53 62107.96 922746.88 00:06:41.642 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0x8000 00:06:41.642 Nvme2n2 : 5.90 126.21 7.89 0.00 0.00 888436.53 24197.91 1038896.84 00:06:41.642 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x8000 length 0x8000 00:06:41.642 Nvme2n2 : 5.92 125.70 7.86 0.00 0.00 891910.40 25508.63 955010.76 00:06:41.642 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0x8000 00:06:41.642 Nvme2n3 : 5.93 130.14 8.13 0.00 0.00 829922.02 32868.82 1058255.16 00:06:41.642 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x8000 length 0x8000 00:06:41.642 Nvme2n3 : 5.92 129.71 8.11 0.00 0.00 837476.17 43959.53 980821.86 00:06:41.642 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x0 length 0x2000 00:06:41.642 Nvme3n1 : 6.03 159.33 9.96 0.00 0.00 658832.25 275.69 1077613.49 00:06:41.642 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:06:41.642 Verification LBA range: start 0x2000 length 0x2000 00:06:41.642 Nvme3n1 : 6.01 149.01 9.31 0.00 0.00 706969.47 857.01 993727.41 00:06:41.642 [2024-12-05T16:53:16.009Z] =================================================================================================================== 00:06:41.642 [2024-12-05T16:53:16.009Z] Total : 1516.31 94.77 0.00 0.00 898624.14 275.69 1161499.57 00:06:43.021 00:06:43.021 real 0m8.451s 00:06:43.021 user 0m15.838s 00:06:43.021 sys 0m0.330s 00:06:43.021 16:53:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.021 16:53:17 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:43.021 ************************************ 00:06:43.021 END TEST bdev_verify_big_io 00:06:43.021 ************************************ 00:06:43.021 16:53:17 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.021 16:53:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:43.021 16:53:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.021 16:53:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:43.021 ************************************ 00:06:43.021 START TEST bdev_write_zeroes 00:06:43.021 ************************************ 00:06:43.021 16:53:17 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:43.280 [2024-12-05 16:53:17.412581] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:43.280 [2024-12-05 16:53:17.412674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60484 ] 00:06:43.280 [2024-12-05 16:53:17.562014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.280 [2024-12-05 16:53:17.640942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.849 Running I/O for 1 seconds... 
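The verify, big-IO, and write_zeroes passes in this section are the same bdevperf binary with a different workload and I/O size. The flags below are copied from the traced command lines; only the grouping into one script is illustrative.

BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
# Flags as traced above; -C and -m 0x3 appear only on the verify passes.
"$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w verify       -t 5 -C -m 0x3
"$BDEVPERF" --json "$CONF" -q 128 -o 65536 -w verify       -t 5 -C -m 0x3
"$BDEVPERF" --json "$CONF" -q 128 -o 4096  -w write_zeroes -t 1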
00:06:45.233 63360.00 IOPS, 247.50 MiB/s 00:06:45.233 Latency(us) 00:06:45.233 [2024-12-05T16:53:19.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:45.233 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme0n1 : 1.02 10517.14 41.08 0.00 0.00 12149.52 4537.11 23189.66 00:06:45.233 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme1n1 : 1.02 10504.37 41.03 0.00 0.00 12148.95 8015.56 19963.27 00:06:45.233 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme2n1 : 1.02 10491.78 40.98 0.00 0.00 12106.29 6251.13 20971.52 00:06:45.233 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme2n2 : 1.03 10479.25 40.93 0.00 0.00 12104.50 6452.78 20164.92 00:06:45.233 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme2n3 : 1.03 10466.73 40.89 0.00 0.00 12102.47 6503.19 20265.75 00:06:45.233 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:45.233 Nvme3n1 : 1.03 10454.25 40.84 0.00 0.00 12098.07 7057.72 20568.22 00:06:45.233 [2024-12-05T16:53:19.600Z] =================================================================================================================== 00:06:45.233 [2024-12-05T16:53:19.600Z] Total : 62913.52 245.76 0.00 0.00 12118.30 4537.11 23189.66 00:06:45.807 00:06:45.807 real 0m2.644s 00:06:45.807 user 0m2.362s 00:06:45.807 sys 0m0.167s 00:06:45.807 ************************************ 00:06:45.807 END TEST bdev_write_zeroes 00:06:45.807 ************************************ 00:06:45.807 16:53:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.807 16:53:20 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:45.807 16:53:20 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.807 16:53:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:45.807 16:53:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.807 16:53:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.807 ************************************ 00:06:45.807 START TEST bdev_json_nonenclosed 00:06:45.807 ************************************ 00:06:45.807 16:53:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:45.807 [2024-12-05 16:53:20.142180] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
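The two json_config tests that begin here feed bdevperf deliberately malformed configs and expect it to refuse them. The file contents are not shown in the log, so the sketches below are assumptions shaped to trigger exactly the two errors reported: a top-level value that is not an object, and a 'subsystems' key that is not an array.

# Assumed contents; the real files are test/bdev/nonenclosed.json and
# test/bdev/nonarray.json in the spdk repo, as the traced paths show.
cat > nonenclosed.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF
# valid JSON, but a bare array -> "not enclosed in {}"

cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF
# 'subsystems' is an object -> "'subsystems' should be an array"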
00:06:45.807 [2024-12-05 16:53:20.142324] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60537 ] 00:06:46.068 [2024-12-05 16:53:20.305991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.068 [2024-12-05 16:53:20.424509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.068 [2024-12-05 16:53:20.424616] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:46.068 [2024-12-05 16:53:20.424635] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:46.068 [2024-12-05 16:53:20.424646] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.331 ************************************ 00:06:46.331 END TEST bdev_json_nonenclosed 00:06:46.331 ************************************ 00:06:46.331 00:06:46.331 real 0m0.548s 00:06:46.331 user 0m0.323s 00:06:46.331 sys 0m0.119s 00:06:46.331 16:53:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.331 16:53:20 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:46.331 16:53:20 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.331 16:53:20 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:46.331 16:53:20 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.331 16:53:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:46.331 ************************************ 00:06:46.331 START TEST bdev_json_nonarray 00:06:46.331 ************************************ 00:06:46.331 16:53:20 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:46.593 [2024-12-05 16:53:20.759679] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:46.593 [2024-12-05 16:53:20.759818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60558 ] 00:06:46.593 [2024-12-05 16:53:20.923728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.855 [2024-12-05 16:53:21.053933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.855 [2024-12-05 16:53:21.054079] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
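Both JSON negative tests here feed deliberately malformed configs to bdevperf and expect startup to fail in json_config_prepare_ctx. A sketch with hypothetical stand-in files (the shipped nonenclosed.json and nonarray.json may differ in detail, but any config of these shapes fails the same way):

# Shape 1: top level is not a single JSON object -> "not enclosed in {}".
printf '"subsystems": []\n' > /tmp/nonenclosed.json
# Shape 2: "subsystems" is present but not an array -> "'subsystems' should be an array".
printf '{ "subsystems": {} }\n' > /tmp/nonarray.json
SPDK=/home/vagrant/spdk_repo/spdk
for cfg in /tmp/nonenclosed.json /tmp/nonarray.json; do
    "$SPDK/build/examples/bdevperf" --json "$cfg" -q 128 -o 4096 -w write_zeroes -t 1
    echo "$cfg -> exit $?"   # both runs are expected to fail config validation
done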
00:06:46.855 [2024-12-05 16:53:21.054099] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:46.855 [2024-12-05 16:53:21.054116] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.117 00:06:47.117 real 0m0.560s 00:06:47.117 user 0m0.335s 00:06:47.117 sys 0m0.119s 00:06:47.117 ************************************ 00:06:47.117 END TEST bdev_json_nonarray 00:06:47.117 ************************************ 00:06:47.117 16:53:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.117 16:53:21 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:47.117 16:53:21 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:47.117 00:06:47.117 real 0m37.410s 00:06:47.117 user 0m56.915s 00:06:47.117 sys 0m5.589s 00:06:47.117 ************************************ 00:06:47.117 END TEST blockdev_nvme 00:06:47.117 ************************************ 00:06:47.117 16:53:21 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.117 16:53:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:47.117 16:53:21 -- spdk/autotest.sh@209 -- # uname -s 00:06:47.117 16:53:21 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:47.117 16:53:21 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:47.117 16:53:21 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:47.117 16:53:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.117 16:53:21 -- common/autotest_common.sh@10 -- # set +x 00:06:47.117 ************************************ 00:06:47.117 START TEST blockdev_nvme_gpt 00:06:47.117 ************************************ 00:06:47.117 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:47.117 * Looking for test storage... 
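Every START TEST/END TEST banner in this log (the storage probe for the gpt run continues just below) comes from the run_test wrapper; a simplified sketch of the pattern, noting that the real helper in test/common/autotest_common.sh also handles xtrace and per-test timing bookkeeping:

# Simplified run_test: wrap a test command with START/END markers and timing.
run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"          # the wrapped test command and its arguments
    local rc=$?
    echo "END TEST $name"
    return $rc
}
run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt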
00:06:47.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:47.117 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.117 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.117 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.380 16:53:21 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.380 --rc genhtml_branch_coverage=1 00:06:47.380 --rc genhtml_function_coverage=1 00:06:47.380 --rc genhtml_legend=1 00:06:47.380 --rc geninfo_all_blocks=1 00:06:47.380 --rc geninfo_unexecuted_blocks=1 00:06:47.380 00:06:47.380 ' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.380 --rc 
genhtml_branch_coverage=1 00:06:47.380 --rc genhtml_function_coverage=1 00:06:47.380 --rc genhtml_legend=1 00:06:47.380 --rc geninfo_all_blocks=1 00:06:47.380 --rc geninfo_unexecuted_blocks=1 00:06:47.380 00:06:47.380 ' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.380 --rc genhtml_branch_coverage=1 00:06:47.380 --rc genhtml_function_coverage=1 00:06:47.380 --rc genhtml_legend=1 00:06:47.380 --rc geninfo_all_blocks=1 00:06:47.380 --rc geninfo_unexecuted_blocks=1 00:06:47.380 00:06:47.380 ' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.380 --rc genhtml_branch_coverage=1 00:06:47.380 --rc genhtml_function_coverage=1 00:06:47.380 --rc genhtml_legend=1 00:06:47.380 --rc geninfo_all_blocks=1 00:06:47.380 --rc geninfo_unexecuted_blocks=1 00:06:47.380 00:06:47.380 ' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60641 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:47.380 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:06:47.380 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60641 00:06:47.380 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60641 ']' 00:06:47.381 16:53:21 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:47.381 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.381 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.381 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.381 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.381 16:53:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:47.381 [2024-12-05 16:53:21.664170] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:47.381 [2024-12-05 16:53:21.664585] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60641 ] 00:06:47.642 [2024-12-05 16:53:21.832726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.642 [2024-12-05 16:53:21.965361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.586 16:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.586 16:53:22 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:48.586 16:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:48.586 16:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:48.586 16:53:22 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:48.848 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:48.848 Waiting for block devices as requested 00:06:49.109 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.109 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.109 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:49.369 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:54.679 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:54.679 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:54.679 16:53:28 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:54.679 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
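The get_zoned_devs xtrace above boils down to one sysfs check per NVMe namespace; a minimal stand-alone sketch of the same scan:

# A namespace counts as zoned when /sys/block/<ns>/queue/zoned is not "none".
for ns in /sys/class/nvme/nvme*/nvme*n*; do
    dev=${ns##*/}
    attr=/sys/block/$dev/queue/zoned
    if [[ -e $attr && $(<"$attr") != none ]]; then
        echo "$dev is zoned"
    fi
done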
00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:54.680 BYT; 00:06:54.680 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:54.680 BYT; 00:06:54.680 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:54.680 16:53:28 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:54.680 16:53:28 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:55.614 The operation has completed successfully. 00:06:55.614 16:53:29 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:56.548 The operation has completed successfully. 00:06:56.548 16:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:56.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:57.375 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.375 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.375 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.375 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:57.375 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:57.375 16:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.375 16:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.375 [] 00:06:57.375 16:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.375 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:57.375 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:57.375 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:57.375 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:57.636 16:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:57.636 16:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.636 16:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:57.895 16:53:32 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:57.895 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:57.895 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "4916e8e9-ddcd-4db4-8a86-f15a185dc863"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "4916e8e9-ddcd-4db4-8a86-f15a185dc863",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a1a58152-104f-4cc8-ac27-1e197dd6d04e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a1a58152-104f-4cc8-ac27-1e197dd6d04e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f12e822d-ef3f-40d7-b23d-53385450bbd9"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f12e822d-ef3f-40d7-b23d-53385450bbd9",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "33629f0e-5aed-42cb-9d77-b544341943ae"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "33629f0e-5aed-42cb-9d77-b544341943ae",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "ac666b08-6b08-4f53-8d20-1b7053076df0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "ac666b08-6b08-4f53-8d20-1b7053076df0",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:57.896 16:53:32 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60641 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60641 ']' 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60641 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60641 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.896 killing process with pid 60641 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60641' 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60641 00:06:57.896 16:53:32 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60641 00:06:59.801 16:53:33 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:59.801 16:53:33 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.801 16:53:33 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:59.801 16:53:33 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.801 16:53:33 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:59.801 ************************************ 00:06:59.801 START TEST bdev_hello_world 00:06:59.801 ************************************ 00:06:59.801 16:53:33 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:59.801 [2024-12-05 16:53:33.782470] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:06:59.801 [2024-12-05 16:53:33.782598] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61267 ] 00:06:59.801 [2024-12-05 16:53:33.946560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.801 [2024-12-05 16:53:34.070943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.368 [2024-12-05 16:53:34.657878] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:00.368 [2024-12-05 16:53:34.657920] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:00.368 [2024-12-05 16:53:34.657942] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:00.368 [2024-12-05 16:53:34.660583] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:00.368 [2024-12-05 16:53:34.662209] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:00.368 [2024-12-05 16:53:34.662248] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:00.368 [2024-12-05 16:53:34.662722] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
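The hello_bdev example exercises the minimal bdev lifecycle seen in the notices above: open the bdev, write "Hello World!", read it back, stop the app. The invocation used here, runnable by hand against the same generated config:

# Re-run the hello_world pass against the config produced by gen_nvme.sh.
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Nvme0n1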
00:07:00.368 00:07:00.368 [2024-12-05 16:53:34.662748] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:01.300 00:07:01.300 real 0m1.653s 00:07:01.300 user 0m1.322s 00:07:01.300 sys 0m0.221s 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:01.300 ************************************ 00:07:01.300 END TEST bdev_hello_world 00:07:01.300 ************************************ 00:07:01.300 16:53:35 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:07:01.300 16:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:01.300 16:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.300 16:53:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:01.300 ************************************ 00:07:01.300 START TEST bdev_bounds 00:07:01.300 ************************************ 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:01.300 Process bdevio pid: 61306 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61306 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61306' 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61306 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61306 ']' 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.300 16:53:35 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:01.300 [2024-12-05 16:53:35.485830] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:01.300 [2024-12-05 16:53:35.485919] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61306 ] 00:07:01.300 [2024-12-05 16:53:35.635034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.300 [2024-12-05 16:53:35.714380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.557 [2024-12-05 16:53:35.714690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.557 [2024-12-05 16:53:35.714768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.121 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.121 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:02.121 16:53:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:02.121 I/O targets: 00:07:02.121 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:02.121 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:02.121 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:02.121 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:02.121 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:02.121 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:02.121 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:02.121 00:07:02.121 00:07:02.121 CUnit - A unit testing framework for C - Version 2.1-3 00:07:02.121 http://cunit.sourceforge.net/ 00:07:02.121 00:07:02.121 00:07:02.121 Suite: bdevio tests on: Nvme3n1 00:07:02.121 Test: blockdev write read block ...passed 00:07:02.121 Test: blockdev write zeroes read block ...passed 00:07:02.121 Test: blockdev write zeroes read no split ...passed 00:07:02.121 Test: blockdev write zeroes read split ...passed 00:07:02.121 Test: blockdev write zeroes read split partial ...passed 00:07:02.121 Test: blockdev reset ...[2024-12-05 16:53:36.479746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:02.121 [2024-12-05 16:53:36.482234] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:07:02.121 passed 00:07:02.121 Test: blockdev write read 8 blocks ...
00:07:02.121 passed 00:07:02.121 Test: blockdev write read size > 128k ...passed 00:07:02.121 Test: blockdev write read invalid size ...passed 00:07:02.121 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.121 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.121 Test: blockdev write read max offset ...passed 00:07:02.121 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.121 Test: blockdev writev readv 8 blocks ...passed 00:07:02.121 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.121 Test: blockdev writev readv block ...passed 00:07:02.380 Test: blockdev writev readv size > 128k ...passed 00:07:02.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.380 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.489880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4404000 len:0x1000 00:07:02.380 [2024-12-05 16:53:36.489932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev nvme passthru rw ...passed 00:07:02.380 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.380 Test: blockdev nvme admin passthru ...[2024-12-05 16:53:36.490730] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.380 [2024-12-05 16:53:36.490755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev copy ...passed 00:07:02.380 Suite: bdevio tests on: Nvme2n3 00:07:02.380 Test: blockdev write read block ...passed 00:07:02.380 Test: blockdev write zeroes read block ...passed 00:07:02.380 Test: blockdev write zeroes read no split ...passed 00:07:02.380 Test: blockdev write zeroes read split ...passed 00:07:02.380 Test: blockdev write zeroes read split partial ...passed 00:07:02.380 Test: blockdev reset ...[2024-12-05 16:53:36.544230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.380 [2024-12-05 16:53:36.547178] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:02.380 passed 00:07:02.380 Test: blockdev write read 8 blocks ...passed 00:07:02.380 Test: blockdev write read size > 128k ...passed 00:07:02.380 Test: blockdev write read invalid size ...passed 00:07:02.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.380 Test: blockdev write read max offset ...passed 00:07:02.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.380 Test: blockdev writev readv 8 blocks ...passed 00:07:02.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.380 Test: blockdev writev readv block ...passed 00:07:02.380 Test: blockdev writev readv size > 128k ...passed 00:07:02.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.380 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.555677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4402000 len:0x1000 00:07:02.380 [2024-12-05 16:53:36.555799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev nvme passthru rw ...passed 00:07:02.380 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.380 Test: blockdev nvme admin passthru ...[2024-12-05 16:53:36.556565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.380 [2024-12-05 16:53:36.556597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev copy ...passed 00:07:02.380 Suite: bdevio tests on: Nvme2n2 00:07:02.380 Test: blockdev write read block ...passed 00:07:02.380 Test: blockdev write zeroes read block ...passed 00:07:02.380 Test: blockdev write zeroes read no split ...passed 00:07:02.380 Test: blockdev write zeroes read split ...passed 00:07:02.380 Test: blockdev write zeroes read split partial ...passed 00:07:02.380 Test: blockdev reset ...[2024-12-05 16:53:36.608149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.380 [2024-12-05 16:53:36.610911] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:02.380 passed 00:07:02.380 Test: blockdev write read 8 blocks ...passed 00:07:02.380 Test: blockdev write read size > 128k ...passed 00:07:02.380 Test: blockdev write read invalid size ...passed 00:07:02.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.380 Test: blockdev write read max offset ...passed 00:07:02.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.380 Test: blockdev writev readv 8 blocks ...passed 00:07:02.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.380 Test: blockdev writev readv block ...passed 00:07:02.380 Test: blockdev writev readv size > 128k ...passed 00:07:02.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.380 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.618253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7438000 len:0x1000 00:07:02.380 [2024-12-05 16:53:36.618377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev nvme passthru rw ...passed 00:07:02.380 Test: blockdev nvme passthru vendor specific ...[2024-12-05 16:53:36.619300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.380 [2024-12-05 16:53:36.619393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev nvme admin passthru ...passed 00:07:02.380 Test: blockdev copy ...passed 00:07:02.380 Suite: bdevio tests on: Nvme2n1 00:07:02.380 Test: blockdev write read block ...passed 00:07:02.380 Test: blockdev write zeroes read block ...passed 00:07:02.380 Test: blockdev write zeroes read no split ...passed 00:07:02.380 Test: blockdev write zeroes read split ...passed 00:07:02.380 Test: blockdev write zeroes read split partial ...passed 00:07:02.380 Test: blockdev reset ...[2024-12-05 16:53:36.669249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:02.380 [2024-12-05 16:53:36.671821] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:02.380 passed 00:07:02.380 Test: blockdev write read 8 blocks ...passed 00:07:02.380 Test: blockdev write read size > 128k ...passed 00:07:02.380 Test: blockdev write read invalid size ...passed 00:07:02.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.380 Test: blockdev write read max offset ...passed 00:07:02.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.380 Test: blockdev writev readv 8 blocks ...passed 00:07:02.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.380 Test: blockdev writev readv block ...passed 00:07:02.380 Test: blockdev writev readv size > 128k ...passed 00:07:02.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.380 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.679845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d7434000 len:0x1000 00:07:02.380 [2024-12-05 16:53:36.679882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev nvme passthru rw ...passed 00:07:02.380 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.380 Test: blockdev nvme admin passthru ...[2024-12-05 16:53:36.680672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:02.380 [2024-12-05 16:53:36.680701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:02.380 passed 00:07:02.380 Test: blockdev copy ...passed 00:07:02.380 Suite: bdevio tests on: Nvme1n1p2 00:07:02.380 Test: blockdev write read block ...passed 00:07:02.380 Test: blockdev write zeroes read block ...passed 00:07:02.380 Test: blockdev write zeroes read no split ...passed 00:07:02.380 Test: blockdev write zeroes read split ...passed 00:07:02.380 Test: blockdev write zeroes read split partial ...passed 00:07:02.380 Test: blockdev reset ...[2024-12-05 16:53:36.730763] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:02.380 [2024-12-05 16:53:36.733114] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:02.380 passed 00:07:02.380 Test: blockdev write read 8 blocks ...passed 00:07:02.380 Test: blockdev write read size > 128k ...passed 00:07:02.380 Test: blockdev write read invalid size ...passed 00:07:02.380 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.380 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.380 Test: blockdev write read max offset ...passed 00:07:02.380 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.380 Test: blockdev writev readv 8 blocks ...passed 00:07:02.380 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.380 Test: blockdev writev readv block ...passed 00:07:02.380 Test: blockdev writev readv size > 128k ...passed 00:07:02.380 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.380 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.741075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2d7430000 len:0x1000 00:07:02.381 [2024-12-05 16:53:36.741183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.381 passed 00:07:02.380 Test: blockdev nvme passthru rw ...passed 00:07:02.380 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.380 Test: blockdev nvme admin passthru ...passed 00:07:02.381 Test: blockdev copy ...passed 00:07:02.381 Suite: bdevio tests on: Nvme1n1p1 00:07:02.381 Test: blockdev write read block ...passed 00:07:02.381 Test: blockdev write zeroes read block ...passed 00:07:02.639 Test: blockdev write zeroes read no split ...passed 00:07:02.639 Test: blockdev write zeroes read split ...passed 00:07:02.639 Test: blockdev write zeroes read split partial ...passed 00:07:02.639 Test: blockdev reset ...[2024-12-05 16:53:36.781941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:02.639 [2024-12-05 16:53:36.785190] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:02.639 passed 00:07:02.639 Test: blockdev write read 8 blocks ...passed 00:07:02.639 Test: blockdev write read size > 128k ...passed 00:07:02.639 Test: blockdev write read invalid size ...passed 00:07:02.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.639 Test: blockdev write read max offset ...passed 00:07:02.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.639 Test: blockdev writev readv 8 blocks ...passed 00:07:02.639 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.639 Test: blockdev writev readv block ...passed 00:07:02.639 Test: blockdev writev readv size > 128k ...passed 00:07:02.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.639 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.793548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b4e0e000 len:0x1000 00:07:02.639 [2024-12-05 16:53:36.793672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:02.639 passed 00:07:02.639 Test: blockdev nvme passthru rw ...passed 00:07:02.639 Test: blockdev nvme passthru vendor specific ...passed 00:07:02.639 Test: blockdev nvme admin passthru ...passed 00:07:02.639 Test: blockdev copy ...passed 00:07:02.639 Suite: bdevio tests on: Nvme0n1 00:07:02.639 Test: blockdev write read block ...passed 00:07:02.639 Test: blockdev write zeroes read block ...passed 00:07:02.639 Test: blockdev write zeroes read no split ...passed 00:07:02.639 Test: blockdev write zeroes read split ...passed 00:07:02.639 Test: blockdev write zeroes read split partial ...passed 00:07:02.639 Test: blockdev reset ...[2024-12-05 16:53:36.836229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:02.639 [2024-12-05 16:53:36.838860] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:02.639 passed 00:07:02.639 Test: blockdev write read 8 blocks ...passed 00:07:02.639 Test: blockdev write read size > 128k ...passed 00:07:02.639 Test: blockdev write read invalid size ...passed 00:07:02.639 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:02.639 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:02.639 Test: blockdev write read max offset ...passed 00:07:02.639 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:02.639 Test: blockdev writev readv 8 blocks ...passed 00:07:02.639 Test: blockdev writev readv 30 x 1block ...passed 00:07:02.639 Test: blockdev writev readv block ...passed 00:07:02.639 Test: blockdev writev readv size > 128k ...passed 00:07:02.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:02.639 Test: blockdev comparev and writev ...[2024-12-05 16:53:36.846943] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:02.639 separate metadata which is not supported yet. 
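Unlike the other six bdevs, Nvme0n1 in this run is formatted with separate metadata, which blockdev_comparev_and_writev does not support yet, so that one check is skipped rather than failed. To see which bdevs carry metadata, one could query the running app over RPC; this is a sketch that assumes the md_size field reported by SPDK's bdev_get_bdevs call, not a command taken from this trace:

# list each bdev with its metadata size; a non-zero md_size explains the skip above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) md_size=\(.md_size)"'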
00:07:02.639 passed 00:07:02.639 Test: blockdev nvme passthru rw ...passed 00:07:02.639 Test: blockdev nvme passthru vendor specific ...[2024-12-05 16:53:36.847617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:02.639 [2024-12-05 16:53:36.847747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:02.639 passed 00:07:02.639 Test: blockdev nvme admin passthru ...passed 00:07:02.639 Test: blockdev copy ...passed 00:07:02.639 00:07:02.639 Run Summary: Type Total Ran Passed Failed Inactive 00:07:02.639 suites 7 7 n/a 0 0 00:07:02.639 tests 161 161 161 0 0 00:07:02.639 asserts 1025 1025 1025 0 n/a 00:07:02.639 00:07:02.639 Elapsed time = 1.084 seconds 00:07:02.639 0 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61306 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61306 ']' 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61306 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61306 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61306' killing process with pid 61306 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61306 00:07:02.639 16:53:36 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61306 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:03.574 00:07:03.574 real 0m2.146s 00:07:03.574 user 0m5.582s 00:07:03.574 sys 0m0.259s 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:03.574 ************************************ 00:07:03.574 END TEST bdev_bounds 00:07:03.574 ************************************ 00:07:03.574 16:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.574 16:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:03.574 16:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.574 16:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.574 ************************************ 00:07:03.574 START TEST bdev_nbd 00:07:03.574 ************************************ 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:03.574 16:53:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61365 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61365 /var/tmp/spdk-nbd.sock 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61365 ']' 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.574 16:53:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:03.574 [2024-12-05 16:53:37.708769] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
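Once bdev_svc is listening, nbd_function_test exports each of the seven bdevs through the kernel NBD driver and proves it readable before tearing it down again. Condensed, the per-device cycle that the trace below expands step by step looks roughly like this; it is a sketch rather than the harness itself, with the paths taken from this run and a simple poll standing in for waitfornbd's bounded 20-attempt loop:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for bdev in Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1; do
    # nbd_start_disk prints the kernel device it bound, e.g. /dev/nbd0
    dev=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
    # wait for the kernel to list the device, then read one 4 KiB block directly
    until grep -q -w "$(basename "$dev")" /proc/partitions; do sleep 0.1; done
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -eq 4096 ]    # the read must yield a full block
    rm -f /tmp/nbdtest
    "$rpc" -s "$sock" nbd_stop_disk "$dev"       # waitfornbd_exit then polls until the name disappears
done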
00:07:03.574 [2024-12-05 16:53:37.709028] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:03.574 [2024-12-05 16:53:37.862151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.833 [2024-12-05 16:53:37.957525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.399 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.657 1+0 records in 00:07:04.657 1+0 records out 00:07:04.657 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385487 s, 10.6 MB/s 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.657 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.658 16:53:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.658 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.658 1+0 records in 00:07:04.658 1+0 records out 00:07:04.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297327 s, 13.8 MB/s 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.916 1+0 records in 00:07:04.916 1+0 records out 00:07:04.916 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438844 s, 9.3 MB/s 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:04.916 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.175 1+0 records in 00:07:05.175 1+0 records out 00:07:05.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463062 s, 8.8 MB/s 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.175 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.433 1+0 records in 00:07:05.433 1+0 records out 00:07:05.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549001 s, 7.5 MB/s 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.433 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.691 1+0 records in 00:07:05.691 1+0 records out 00:07:05.691 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000549678 s, 7.5 MB/s 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.691 16:53:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.950 1+0 records in 00:07:05.950 1+0 records out 00:07:05.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321591 s, 12.7 MB/s 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:05.950 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd0", 00:07:06.209 "bdev_name": "Nvme0n1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd1", 00:07:06.209 "bdev_name": "Nvme1n1p1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd2", 00:07:06.209 "bdev_name": "Nvme1n1p2" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd3", 00:07:06.209 "bdev_name": "Nvme2n1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd4", 00:07:06.209 "bdev_name": "Nvme2n2" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd5", 00:07:06.209 "bdev_name": "Nvme2n3" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd6", 00:07:06.209 "bdev_name": "Nvme3n1" 00:07:06.209 } 00:07:06.209 ]' 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd0", 00:07:06.209 "bdev_name": "Nvme0n1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd1", 00:07:06.209 "bdev_name": "Nvme1n1p1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd2", 00:07:06.209 "bdev_name": "Nvme1n1p2" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd3", 00:07:06.209 "bdev_name": "Nvme2n1" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd4", 00:07:06.209 "bdev_name": "Nvme2n2" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd5", 00:07:06.209 "bdev_name": "Nvme2n3" 00:07:06.209 }, 00:07:06.209 { 00:07:06.209 "nbd_device": "/dev/nbd6", 00:07:06.209 "bdev_name": "Nvme3n1" 00:07:06.209 } 00:07:06.209 ]' 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.209 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.468 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.727 16:53:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.727 16:53:40 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.984 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:07.243 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.501 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.759 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.759 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.759 16:53:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.759 
16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:07.759 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:08.017 /dev/nbd0 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.017 1+0 records in 00:07:08.017 1+0 records out 00:07:08.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433825 s, 9.4 MB/s 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.017 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:08.275 /dev/nbd1 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.275 16:53:42 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.275 1+0 records in 00:07:08.275 1+0 records out 00:07:08.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522834 s, 7.8 MB/s 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.275 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:08.534 /dev/nbd10 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.534 1+0 records in 00:07:08.534 1+0 records out 00:07:08.534 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526891 s, 7.8 MB/s 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:08.534 /dev/nbd11 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.534 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.793 1+0 records in 00:07:08.793 1+0 records out 00:07:08.793 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472103 s, 8.7 MB/s 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.793 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.794 16:53:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:08.794 /dev/nbd12 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
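A few records further on, the harness cross-checks its bookkeeping against the app: nbd_get_disks returns the JSON list of active exports, and counting them reduces to the same jq filter used in the surrounding trace (a sketch over the RPC socket of this run):

# count the NBD devices the SPDK app still exports;
# after nbd_stop_disks the JSON is [] and this count must drop back to 0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd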
00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.794 1+0 records in 00:07:08.794 1+0 records out 00:07:08.794 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484239 s, 8.5 MB/s 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:08.794 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:09.052 /dev/nbd13 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.052 1+0 records in 00:07:09.052 1+0 records out 00:07:09.052 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00049736 s, 8.2 MB/s 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.052 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:09.311 /dev/nbd14 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:09.311 1+0 records in 00:07:09.311 1+0 records out 00:07:09.311 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000465616 s, 8.8 MB/s 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:09.311 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd0", 00:07:09.569 "bdev_name": "Nvme0n1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd1", 00:07:09.569 "bdev_name": "Nvme1n1p1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd10", 00:07:09.569 "bdev_name": "Nvme1n1p2" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd11", 00:07:09.569 "bdev_name": "Nvme2n1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd12", 00:07:09.569 "bdev_name": "Nvme2n2" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd13", 00:07:09.569 "bdev_name": "Nvme2n3" 
00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd14", 00:07:09.569 "bdev_name": "Nvme3n1" 00:07:09.569 } 00:07:09.569 ]' 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd0", 00:07:09.569 "bdev_name": "Nvme0n1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd1", 00:07:09.569 "bdev_name": "Nvme1n1p1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd10", 00:07:09.569 "bdev_name": "Nvme1n1p2" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd11", 00:07:09.569 "bdev_name": "Nvme2n1" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd12", 00:07:09.569 "bdev_name": "Nvme2n2" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd13", 00:07:09.569 "bdev_name": "Nvme2n3" 00:07:09.569 }, 00:07:09.569 { 00:07:09.569 "nbd_device": "/dev/nbd14", 00:07:09.569 "bdev_name": "Nvme3n1" 00:07:09.569 } 00:07:09.569 ]' 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.569 /dev/nbd1 00:07:09.569 /dev/nbd10 00:07:09.569 /dev/nbd11 00:07:09.569 /dev/nbd12 00:07:09.569 /dev/nbd13 00:07:09.569 /dev/nbd14' 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.569 /dev/nbd1 00:07:09.569 /dev/nbd10 00:07:09.569 /dev/nbd11 00:07:09.569 /dev/nbd12 00:07:09.569 /dev/nbd13 00:07:09.569 /dev/nbd14' 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:09.569 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.570 256+0 records in 00:07:09.570 256+0 records out 00:07:09.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111878 s, 93.7 MB/s 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.570 256+0 records in 00:07:09.570 256+0 records out 00:07:09.570 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0745721 s, 14.1 MB/s 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.570 16:53:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.828 256+0 records in 00:07:09.828 256+0 records out 00:07:09.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0771422 s, 13.6 MB/s 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.828 256+0 records in 00:07:09.828 256+0 records out 00:07:09.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0771525 s, 13.6 MB/s 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:09.828 256+0 records in 00:07:09.828 256+0 records out 00:07:09.828 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0761642 s, 13.8 MB/s 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.828 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.171 256+0 records in 00:07:10.171 256+0 records out 00:07:10.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130714 s, 8.0 MB/s 00:07:10.171 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.171 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.171 256+0 records in 00:07:10.171 256+0 records out 00:07:10.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15937 s, 6.6 MB/s 00:07:10.171 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.171 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:10.435 256+0 records in 00:07:10.435 256+0 records out 00:07:10.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.215472 s, 4.9 MB/s 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.435 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.693 16:53:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:10.949 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.206 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.463 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.721 16:53:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.721 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:11.977 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:12.233 malloc_lvol_verify 00:07:12.233 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:12.490 270a9008-61be-451a-87cb-53c1137b8562 00:07:12.491 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:12.748 2250359f-c66c-4657-94a8-e22829072b32 00:07:12.748 16:53:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:12.748 /dev/nbd0 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:12.748 mke2fs 1.47.0 (5-Feb-2023) 00:07:12.748 Discarding device blocks: 0/4096 done 00:07:12.748 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:12.748 00:07:12.748 Allocating group tables: 0/1 done 00:07:12.748 Writing inode tables: 0/1 done 00:07:12.748 Creating journal (1024 blocks): done 00:07:12.748 Writing superblocks and filesystem accounting information: 0/1 done 00:07:12.748 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:12.748 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61365 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61365 ']' 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61365 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61365 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.009 killing process with pid 61365 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61365' 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61365 00:07:13.009 16:53:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61365 00:07:13.950 16:53:48 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:13.950 00:07:13.950 real 0m10.468s 00:07:13.950 user 0m14.741s 00:07:13.950 sys 0m3.408s 00:07:13.950 16:53:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.950 16:53:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:13.950 ************************************ 00:07:13.950 END TEST bdev_nbd 00:07:13.950 ************************************ 00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:07:13.951 skipping fio tests on NVMe due to multi-ns failures. 00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:13.951 16:53:48 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.951 16:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:13.951 16:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.951 16:53:48 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.951 ************************************ 00:07:13.951 START TEST bdev_verify 00:07:13.951 ************************************ 00:07:13.951 16:53:48 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:13.951 [2024-12-05 16:53:48.215730] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:13.951 [2024-12-05 16:53:48.215842] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61775 ] 00:07:14.211 [2024-12-05 16:53:48.374855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.211 [2024-12-05 16:53:48.478754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.211 [2024-12-05 16:53:48.478939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.781 Running I/O for 5 seconds... 
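In the throughput samples that follow, MiB/s is simply IOPS multiplied by the 4096-byte I/O size passed to bdevperf with -o 4096; the first sample checks out as

    22016 IOPS × 4096 B = 90,177,536 B/s
    90,177,536 B/s ÷ 1,048,576 B/MiB = 86.00 MiB/s

matching the reported 86.00 MiB/s.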
00:07:17.106 22016.00 IOPS, 86.00 MiB/s
[2024-12-05T16:53:52.416Z] 21952.00 IOPS, 85.75 MiB/s
[2024-12-05T16:53:53.358Z] 21866.67 IOPS, 85.42 MiB/s
[2024-12-05T16:53:54.301Z] 22272.00 IOPS, 87.00 MiB/s
[2024-12-05T16:53:54.301Z] 22579.20 IOPS, 88.20 MiB/s
00:07:19.934 Latency(us)
00:07:19.934 [2024-12-05T16:53:54.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:19.934 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.934 Verification LBA range: start 0x0 length 0xbd0bd
00:07:19.934 Nvme0n1 : 5.06 1592.18 6.22 0.00 0.00 80165.88 16938.54 77030.01
00:07:19.934 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.934 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:19.934 Nvme0n1 : 5.05 1596.45 6.24 0.00 0.00 79945.52 16837.71 73400.32
00:07:19.934 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.934 Verification LBA range: start 0x0 length 0x4ff80
00:07:19.935 Nvme1n1p1 : 5.07 1591.69 6.22 0.00 0.00 80019.22 18450.90 66947.54
00:07:19.935 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:19.935 Nvme1n1p1 : 5.05 1595.95 6.23 0.00 0.00 79846.63 16736.89 67754.14
00:07:19.935 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x0 length 0x4ff7f
00:07:19.935 Nvme1n1p2 : 5.07 1590.69 6.21 0.00 0.00 79909.93 17644.31 65334.35
00:07:19.935 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:19.935 Nvme1n1p2 : 5.05 1595.51 6.23 0.00 0.00 79722.53 15728.64 63721.16
00:07:19.935 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x0 length 0x80000
00:07:19.935 Nvme2n1 : 5.07 1589.65 6.21 0.00 0.00 79782.96 17140.18 60898.07
00:07:19.935 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x80000 length 0x80000
00:07:19.935 Nvme2n1 : 5.06 1595.05 6.23 0.00 0.00 79613.41 15829.46 61301.37
00:07:19.935 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x0 length 0x80000
00:07:19.935 Nvme2n2 : 5.08 1588.59 6.21 0.00 0.00 79658.81 16938.54 63317.86
00:07:19.935 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x80000 length 0x80000
00:07:19.935 Nvme2n2 : 5.07 1603.26 6.26 0.00 0.00 79085.28 2848.30 64527.75
00:07:19.935 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x0 length 0x80000
00:07:19.935 Nvme2n3 : 5.08 1588.18 6.20 0.00 0.00 79517.17 12855.14 65737.65
00:07:19.935 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x80000 length 0x80000
00:07:19.935 Nvme2n3 : 5.07 1602.22 6.26 0.00 0.00 78977.09 5217.67 67350.84
00:07:19.935 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x0 length 0x20000
00:07:19.935 Nvme3n1 : 5.09 1608.38 6.28 0.00 0.00 78542.87 5242.88 69770.63
00:07:19.935 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:19.935 Verification LBA range: start 0x20000 length 0x20000
00:07:19.935 Nvme3n1 : 5.08 1611.43 6.29 0.00 0.00 78487.32 6276.33 70173.93
00:07:19.935 [2024-12-05T16:53:54.302Z] ===================================================================================================================
00:07:19.935 [2024-12-05T16:53:54.302Z] Total : 22349.21 87.30 0.00 0.00 79516.79 2848.30 77030.01
00:07:21.327
00:07:21.327 real 0m7.365s
00:07:21.327 user 0m13.833s
00:07:21.327 sys 0m0.215s
00:07:21.327 16:53:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.327 16:53:55 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:21.327 ************************************
00:07:21.327 END TEST bdev_verify
00:07:21.327 ************************************
00:07:21.327 16:53:55 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:21.327 16:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:21.327 16:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.327 16:53:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:21.327 ************************************
00:07:21.327 START TEST bdev_verify_big_io
00:07:21.327 ************************************
00:07:21.327 16:53:55 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:21.327 [2024-12-05 16:53:55.618654] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:07:21.327 [2024-12-05 16:53:55.618767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61868 ]
00:07:21.589 [2024-12-05 16:53:55.776428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:21.589 [2024-12-05 16:53:55.879351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:21.589 [2024-12-05 16:53:55.879428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:22.535 Running I/O for 5 seconds...
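bdev_verify_big_io reruns the same verify workload with -o 65536 in place of -o 4096, so the samples that follow trade IOPS for per-I/O size. The same cross-check holds: 2506 IOPS × 65,536 B = 164,233,216 B/s, or about 156.62 MiB/s, exactly the first sample reported below.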
00:07:27.338 2506.00 IOPS, 156.62 MiB/s
[2024-12-05T16:54:02.643Z] 3055.50 IOPS, 190.97 MiB/s
[2024-12-05T16:54:02.904Z] 3170.00 IOPS, 198.12 MiB/s
00:07:28.537 Latency(us)
00:07:28.537 [2024-12-05T16:54:02.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:28.537 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0xbd0b
00:07:28.537 Nvme0n1 : 5.93 99.75 6.23 0.00 0.00 1229165.32 23290.49 1542213.32
00:07:28.537 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:28.537 Nvme0n1 : 5.91 114.40 7.15 0.00 0.00 1041965.65 19358.33 1213121.77
00:07:28.537 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x4ff8
00:07:28.537 Nvme1n1p1 : 5.73 114.57 7.16 0.00 0.00 1051587.21 107277.39 1690627.15
00:07:28.537 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:28.537 Nvme1n1p1 : 5.94 124.89 7.81 0.00 0.00 948765.61 99614.72 1032444.06
00:07:28.537 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x4ff7
00:07:28.537 Nvme1n1p2 : 5.79 118.89 7.43 0.00 0.00 995299.86 57268.38 1716438.25
00:07:28.537 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:28.537 Nvme1n1p2 : 5.94 120.79 7.55 0.00 0.00 964194.36 28634.19 1587382.74
00:07:28.537 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x8000
00:07:28.537 Nvme2n1 : 5.91 130.07 8.13 0.00 0.00 884144.56 83482.78 1148594.02
00:07:28.537 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x8000 length 0x8000
00:07:28.537 Nvme2n1 : 5.97 126.05 7.88 0.00 0.00 898997.99 17644.31 1619646.62
00:07:28.537 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x8000
00:07:28.537 Nvme2n2 : 5.87 130.75 8.17 0.00 0.00 858834.31 83079.48 1019538.51
00:07:28.537 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x8000 length 0x8000
00:07:28.537 Nvme2n2 : 5.97 125.47 7.84 0.00 0.00 869856.16 20064.10 1645457.72
00:07:28.537 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x8000
00:07:28.537 Nvme2n3 : 5.93 139.75 8.73 0.00 0.00 786658.40 17543.48 1161499.57
00:07:28.537 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x8000 length 0x8000
00:07:28.537 Nvme2n3 : 6.01 139.68 8.73 0.00 0.00 757823.17 16837.71 1258291.20
00:07:28.537 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x0 length 0x2000
00:07:28.537 Nvme3n1 : 5.94 150.79 9.42 0.00 0.00 711082.29 2747.47 1058255.16
00:07:28.537 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:28.537 Verification LBA range: start 0x2000 length 0x2000
00:07:28.537 Nvme3n1 : 6.07 174.53 10.91 0.00 0.00 596754.01 272.54 1535760.54
00:07:28.537 [2024-12-05T16:54:02.904Z] ===================================================================================================================
00:07:28.537 [2024-12-05T16:54:02.904Z] Total : 1810.39 113.15 0.00 0.00 878321.45 272.54 1716438.25
00:07:30.454
00:07:30.454 real 0m8.930s
00:07:30.454 user 0m16.838s
00:07:30.454 sys 0m0.292s
00:07:30.454 16:54:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.454 ************************************
00:07:30.454 END TEST bdev_verify_big_io
00:07:30.454 ************************************
00:07:30.454 16:54:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:30.454 16:54:04 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:30.454 16:54:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:30.454 16:54:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.454 16:54:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:30.454 ************************************
00:07:30.454 START TEST bdev_write_zeroes
00:07:30.454 ************************************
00:07:30.454 16:54:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:30.454 [2024-12-05 16:54:04.623873] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:07:30.454 [2024-12-05 16:54:04.624035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61977 ]
00:07:30.454 [2024-12-05 16:54:04.782490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.715 [2024-12-05 16:54:04.906975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.307 Running I/O for 1 seconds...
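The one-second write_zeroes table that follows runs a single job per bdev on core 0, and the per-bdev IOPS column sums to its Total row: 9372.33 + 9361.02 + 9349.65 + 9339.19 + 9328.79 + 9318.34 + 9307.81 = 65,377.13, agreeing with the reported 65377.14 up to rounding; a quick check that no job's results were dropped.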
00:07:32.246 65856.00 IOPS, 257.25 MiB/s
00:07:32.246 Latency(us)
00:07:32.246 [2024-12-05T16:54:06.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:32.246 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme0n1 : 1.02 9372.33 36.61 0.00 0.00 13626.48 6049.48 25407.80
00:07:32.246 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme1n1p1 : 1.03 9361.02 36.57 0.00 0.00 13619.89 11141.12 24903.68
00:07:32.246 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme1n1p2 : 1.03 9349.65 36.52 0.00 0.00 13586.37 11040.30 24097.08
00:07:32.246 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme2n1 : 1.03 9339.19 36.48 0.00 0.00 13571.53 11191.53 23391.31
00:07:32.246 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme2n2 : 1.03 9328.79 36.44 0.00 0.00 13532.78 11090.71 22786.36
00:07:32.246 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme2n3 : 1.03 9318.34 36.40 0.00 0.00 13527.37 11141.12 23391.31
00:07:32.246 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:32.246 Nvme3n1 : 1.03 9307.81 36.36 0.00 0.00 13488.78 8670.92 25306.98
00:07:32.246 [2024-12-05T16:54:06.613Z] ===================================================================================================================
00:07:32.246 [2024-12-05T16:54:06.613Z] Total : 65377.14 255.38 0.00 0.00 13564.74 6049.48 25407.80
00:07:33.189
00:07:33.189 real 0m2.849s
00:07:33.189 user 0m2.483s
00:07:33.189 sys 0m0.242s
00:07:33.189 16:54:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:33.189 ************************************
00:07:33.189 END TEST bdev_write_zeroes
00:07:33.189 ************************************
00:07:33.189 16:54:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:33.189 16:54:07 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.189 16:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:33.189 16:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:33.189 16:54:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:33.189 ************************************
00:07:33.189 START TEST bdev_json_nonenclosed
00:07:33.189 ************************************
00:07:33.189 16:54:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:33.189 [2024-12-05 16:54:07.533334] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:07:33.189 [2024-12-05 16:54:07.533481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62030 ] 00:07:33.450 [2024-12-05 16:54:07.698841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.710 [2024-12-05 16:54:07.828770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.710 [2024-12-05 16:54:07.828873] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:33.710 [2024-12-05 16:54:07.828893] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:33.710 [2024-12-05 16:54:07.828904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:33.710 00:07:33.710 real 0m0.562s 00:07:33.710 user 0m0.341s 00:07:33.710 sys 0m0.114s 00:07:33.710 ************************************ 00:07:33.710 END TEST bdev_json_nonenclosed 00:07:33.710 ************************************ 00:07:33.710 16:54:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.710 16:54:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:33.710 16:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.710 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:33.710 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.710 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:33.970 ************************************ 00:07:33.970 START TEST bdev_json_nonarray 00:07:33.970 ************************************ 00:07:33.970 16:54:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:33.970 [2024-12-05 16:54:08.150431] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:33.970 [2024-12-05 16:54:08.150569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62061 ] 00:07:33.970 [2024-12-05 16:54:08.315212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.231 [2024-12-05 16:54:08.439537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.231 [2024-12-05 16:54:08.439658] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
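Both JSON tests here are negative tests: bdev_json_nonenclosed feeds bdevperf a nonenclosed.json whose configuration is not wrapped in {}, and bdev_json_nonarray feeds a nonarray.json whose top-level "subsystems" key is not an array, so the json_config errors traced above are the expected outcomes. The fixture contents are not shown in this log; an assumed illustration of a rejected shape against the accepted top-level shape:

    {"subsystems": {"subsystem": "bdev"}}                   # rejected: 'subsystems' should be an array
    {"subsystems": [{"subsystem": "bdev", "config": []}]}   # accepted top-level shape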
00:07:34.231 [2024-12-05 16:54:08.439679] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:34.231 [2024-12-05 16:54:08.439689] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:34.492 00:07:34.492 real 0m0.555s 00:07:34.492 user 0m0.339s 00:07:34.492 sys 0m0.110s 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.492 ************************************ 00:07:34.492 END TEST bdev_json_nonarray 00:07:34.492 ************************************ 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:34.492 16:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:34.492 16:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:34.492 16:54:08 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:34.492 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.492 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.492 16:54:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:34.492 ************************************ 00:07:34.492 START TEST bdev_gpt_uuid 00:07:34.492 ************************************ 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62081 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62081 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62081 ']' 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:34.492 16:54:08 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:34.492 [2024-12-05 16:54:08.787416] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:07:34.492 [2024-12-05 16:54:08.787561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62081 ] 00:07:34.753 [2024-12-05 16:54:08.954698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.753 [2024-12-05 16:54:09.077431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.694 16:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.694 16:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:35.694 16:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.694 16:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.694 16:54:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.955 Some configs were skipped because the RPC state that can call them passed over. 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.955 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:35.956 { 00:07:35.956 "name": "Nvme1n1p1", 00:07:35.956 "aliases": [ 00:07:35.956 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:35.956 ], 00:07:35.956 "product_name": "GPT Disk", 00:07:35.956 "block_size": 4096, 00:07:35.956 "num_blocks": 655104, 00:07:35.956 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:35.956 "assigned_rate_limits": { 00:07:35.956 "rw_ios_per_sec": 0, 00:07:35.956 "rw_mbytes_per_sec": 0, 00:07:35.956 "r_mbytes_per_sec": 0, 00:07:35.956 "w_mbytes_per_sec": 0 00:07:35.956 }, 00:07:35.956 "claimed": false, 00:07:35.956 "zoned": false, 00:07:35.956 "supported_io_types": { 00:07:35.956 "read": true, 00:07:35.956 "write": true, 00:07:35.956 "unmap": true, 00:07:35.956 "flush": true, 00:07:35.956 "reset": true, 00:07:35.956 "nvme_admin": false, 00:07:35.956 "nvme_io": false, 00:07:35.956 "nvme_io_md": false, 00:07:35.956 "write_zeroes": true, 00:07:35.956 "zcopy": false, 00:07:35.956 "get_zone_info": false, 00:07:35.956 "zone_management": false, 00:07:35.956 "zone_append": false, 00:07:35.956 "compare": true, 00:07:35.956 "compare_and_write": false, 00:07:35.956 "abort": true, 00:07:35.956 "seek_hole": false, 00:07:35.956 "seek_data": false, 00:07:35.956 "copy": true, 00:07:35.956 "nvme_iov_md": false 00:07:35.956 }, 00:07:35.956 "driver_specific": { 
00:07:35.956 "gpt": { 00:07:35.956 "base_bdev": "Nvme1n1", 00:07:35.956 "offset_blocks": 256, 00:07:35.956 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:35.956 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:35.956 "partition_name": "SPDK_TEST_first" 00:07:35.956 } 00:07:35.956 } 00:07:35.956 } 00:07:35.956 ]' 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:35.956 { 00:07:35.956 "name": "Nvme1n1p2", 00:07:35.956 "aliases": [ 00:07:35.956 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:35.956 ], 00:07:35.956 "product_name": "GPT Disk", 00:07:35.956 "block_size": 4096, 00:07:35.956 "num_blocks": 655103, 00:07:35.956 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:35.956 "assigned_rate_limits": { 00:07:35.956 "rw_ios_per_sec": 0, 00:07:35.956 "rw_mbytes_per_sec": 0, 00:07:35.956 "r_mbytes_per_sec": 0, 00:07:35.956 "w_mbytes_per_sec": 0 00:07:35.956 }, 00:07:35.956 "claimed": false, 00:07:35.956 "zoned": false, 00:07:35.956 "supported_io_types": { 00:07:35.956 "read": true, 00:07:35.956 "write": true, 00:07:35.956 "unmap": true, 00:07:35.956 "flush": true, 00:07:35.956 "reset": true, 00:07:35.956 "nvme_admin": false, 00:07:35.956 "nvme_io": false, 00:07:35.956 "nvme_io_md": false, 00:07:35.956 "write_zeroes": true, 00:07:35.956 "zcopy": false, 00:07:35.956 "get_zone_info": false, 00:07:35.956 "zone_management": false, 00:07:35.956 "zone_append": false, 00:07:35.956 "compare": true, 00:07:35.956 "compare_and_write": false, 00:07:35.956 "abort": true, 00:07:35.956 "seek_hole": false, 00:07:35.956 "seek_data": false, 00:07:35.956 "copy": true, 00:07:35.956 "nvme_iov_md": false 00:07:35.956 }, 00:07:35.956 "driver_specific": { 00:07:35.956 "gpt": { 00:07:35.956 "base_bdev": "Nvme1n1", 00:07:35.956 "offset_blocks": 655360, 00:07:35.956 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:35.956 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:35.956 "partition_name": "SPDK_TEST_second" 00:07:35.956 } 00:07:35.956 } 00:07:35.956 } 00:07:35.956 ]' 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:35.956 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62081 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62081 ']' 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62081 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62081 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.218 killing process with pid 62081 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62081' 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62081 00:07:36.218 16:54:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62081 00:07:38.134 00:07:38.134 real 0m3.306s 00:07:38.134 user 0m3.329s 00:07:38.134 sys 0m0.487s 00:07:38.134 ************************************ 00:07:38.134 END TEST bdev_gpt_uuid 00:07:38.134 ************************************ 00:07:38.134 16:54:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.134 16:54:12 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:38.134 16:54:12 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.396 Waiting for block devices as requested 00:07:38.396 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.396 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:38.396 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.658 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:43.944 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:43.944 16:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:43.944 16:54:17 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:44.206 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.206 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:44.206 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:44.206 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:44.206 16:54:18 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:44.206 00:07:44.206 real 0m57.046s 00:07:44.206 user 1m11.979s 00:07:44.206 sys 0m8.163s 00:07:44.206 16:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.206 16:54:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 ************************************ 00:07:44.206 END TEST blockdev_nvme_gpt 00:07:44.206 ************************************ 00:07:44.206 16:54:18 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.206 16:54:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:44.206 16:54:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.206 16:54:18 -- common/autotest_common.sh@10 -- # set +x 00:07:44.206 ************************************ 00:07:44.206 START TEST nvme 00:07:44.206 ************************************ 00:07:44.206 16:54:18 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:44.206 * Looking for test storage... 00:07:44.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.468 16:54:18 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.468 16:54:18 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.468 16:54:18 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.468 16:54:18 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.468 16:54:18 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.468 16:54:18 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:44.468 16:54:18 nvme -- scripts/common.sh@345 -- # : 1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.468 16:54:18 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.468 16:54:18 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@353 -- # local d=1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.468 16:54:18 nvme -- scripts/common.sh@355 -- # echo 1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.468 16:54:18 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@353 -- # local d=2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.468 16:54:18 nvme -- scripts/common.sh@355 -- # echo 2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.468 16:54:18 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.468 16:54:18 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.468 16:54:18 nvme -- scripts/common.sh@368 -- # return 0 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.468 --rc genhtml_branch_coverage=1 00:07:44.468 --rc genhtml_function_coverage=1 00:07:44.468 --rc genhtml_legend=1 00:07:44.468 --rc geninfo_all_blocks=1 00:07:44.468 --rc geninfo_unexecuted_blocks=1 00:07:44.468 00:07:44.468 ' 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.468 --rc genhtml_branch_coverage=1 00:07:44.468 --rc genhtml_function_coverage=1 00:07:44.468 --rc genhtml_legend=1 00:07:44.468 --rc geninfo_all_blocks=1 00:07:44.468 --rc geninfo_unexecuted_blocks=1 00:07:44.468 00:07:44.468 ' 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.468 --rc genhtml_branch_coverage=1 00:07:44.468 --rc genhtml_function_coverage=1 00:07:44.468 --rc genhtml_legend=1 00:07:44.468 --rc geninfo_all_blocks=1 00:07:44.468 --rc geninfo_unexecuted_blocks=1 00:07:44.468 00:07:44.468 ' 00:07:44.468 16:54:18 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.468 --rc genhtml_branch_coverage=1 00:07:44.468 --rc genhtml_function_coverage=1 00:07:44.468 --rc genhtml_legend=1 00:07:44.468 --rc geninfo_all_blocks=1 00:07:44.468 --rc geninfo_unexecuted_blocks=1 00:07:44.468 00:07:44.468 ' 00:07:44.468 16:54:18 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:45.040 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:45.302 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.563 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.563 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.563 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:45.563 16:54:19 nvme -- nvme/nvme.sh@79 -- # uname 00:07:45.563 16:54:19 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:45.563 16:54:19 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:45.563 16:54:19 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:45.563 16:54:19 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:45.563 Waiting for stub to ready for secondary processes... 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1075 -- # stubpid=62729 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62729 ]] 00:07:45.563 16:54:19 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:45.563 [2024-12-05 16:54:19.859989] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:07:45.564 [2024-12-05 16:54:19.860130] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:46.563 16:54:20 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:46.563 16:54:20 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/62729 ]] 00:07:46.563 16:54:20 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:46.841 [2024-12-05 16:54:20.992687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.841 [2024-12-05 16:54:21.110159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.841 [2024-12-05 16:54:21.110437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.841 [2024-12-05 16:54:21.110572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.841 [2024-12-05 16:54:21.128809] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:46.841 [2024-12-05 16:54:21.128859] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.841 [2024-12-05 16:54:21.140780] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:46.841 [2024-12-05 16:54:21.140990] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:46.841 [2024-12-05 16:54:21.145096] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.841 [2024-12-05 16:54:21.145502] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:46.841 [2024-12-05 16:54:21.145664] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:46.841 [2024-12-05 16:54:21.150542] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.841 [2024-12-05 16:54:21.150827] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:46.841 [2024-12-05 16:54:21.150919] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:46.841 [2024-12-05 16:54:21.153998] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:46.841 [2024-12-05 16:54:21.154178] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:46.841 [2024-12-05 16:54:21.154233] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:46.841 [2024-12-05 16:54:21.154307] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:46.841 [2024-12-05 16:54:21.154365] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:47.784 16:54:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:47.784 done. 00:07:47.784 16:54:21 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:47.784 16:54:21 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.784 16:54:21 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:47.784 16:54:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.784 16:54:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 ************************************ 00:07:47.784 START TEST nvme_reset 00:07:47.784 ************************************ 00:07:47.784 16:54:21 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:47.784 Initializing NVMe Controllers 00:07:47.784 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:47.784 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:47.784 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:47.784 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:47.784 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:47.784 00:07:47.784 real 0m0.233s 00:07:47.784 user 0m0.079s 00:07:47.784 sys 0m0.105s 00:07:47.784 ************************************ 00:07:47.784 END TEST nvme_reset 00:07:47.784 ************************************ 00:07:47.784 16:54:22 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.784 16:54:22 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:47.784 16:54:22 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:47.784 16:54:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.784 16:54:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.784 16:54:22 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:48.048 ************************************ 00:07:48.048 START TEST nvme_identify 00:07:48.048 ************************************ 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:48.048 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:48.048 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:48.048 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:48.048 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:48.048 16:54:22 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:48.048 16:54:22 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:48.048 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:48.048 [2024-12-05 16:54:22.399642] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62762 terminated unexpected 00:07:48.048 ===================================================== 00:07:48.048 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:48.048 ===================================================== 00:07:48.048 Controller Capabilities/Features 00:07:48.048 ================================ 00:07:48.048 Vendor ID: 1b36 00:07:48.048 Subsystem Vendor ID: 1af4 00:07:48.048 Serial Number: 12343 00:07:48.048 Model Number: QEMU NVMe Ctrl 00:07:48.048 Firmware Version: 8.0.0 00:07:48.048 Recommended Arb Burst: 6 00:07:48.048 IEEE OUI Identifier: 00 54 52 00:07:48.048 Multi-path I/O 00:07:48.048 May have multiple subsystem ports: No 00:07:48.048 May have multiple controllers: Yes 00:07:48.048 Associated with SR-IOV VF: No 00:07:48.048 Max Data Transfer Size: 524288 00:07:48.048 Max Number of Namespaces: 256 00:07:48.048 Max Number of I/O Queues: 64 00:07:48.048 NVMe Specification Version (VS): 1.4 00:07:48.048 NVMe Specification Version (Identify): 1.4 00:07:48.048 Maximum Queue Entries: 2048 00:07:48.048 Contiguous Queues Required: Yes 00:07:48.048 Arbitration Mechanisms Supported 00:07:48.048 Weighted Round Robin: Not Supported 00:07:48.048 Vendor Specific: Not Supported 00:07:48.048 Reset Timeout: 7500 ms 00:07:48.048 Doorbell Stride: 4 bytes 00:07:48.048 NVM Subsystem Reset: Not Supported 00:07:48.048 Command Sets Supported 00:07:48.048 NVM Command Set: Supported 00:07:48.048 Boot Partition: Not Supported 00:07:48.048 Memory Page Size Minimum: 4096 bytes 00:07:48.048 Memory Page Size Maximum: 65536 bytes 00:07:48.048 Persistent Memory Region: Not Supported 00:07:48.048 Optional Asynchronous Events Supported 00:07:48.048 Namespace Attribute Notices: Supported 00:07:48.048 Firmware Activation Notices: Not Supported 00:07:48.048 ANA Change Notices: Not Supported 00:07:48.048 PLE Aggregate Log Change Notices: Not Supported 00:07:48.048 LBA Status Info Alert Notices: Not Supported 00:07:48.048 EGE Aggregate Log Change Notices: Not Supported 00:07:48.048 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.048 Zone Descriptor Change Notices: Not Supported 00:07:48.048 Discovery Log Change Notices: Not Supported 00:07:48.048 Controller Attributes 00:07:48.048 128-bit Host Identifier: Not Supported 00:07:48.048 Non-Operational Permissive Mode: Not Supported 00:07:48.048 NVM Sets: Not Supported 00:07:48.048 Read Recovery Levels: Not Supported 00:07:48.048 Endurance Groups: Supported 00:07:48.048 Predictable Latency Mode: Not Supported 00:07:48.048 Traffic Based Keep ALive: Not Supported 00:07:48.048 Namespace Granularity: Not Supported 00:07:48.048 SQ Associations: Not Supported 00:07:48.048 UUID List: Not Supported 00:07:48.049 Multi-Domain Subsystem: Not Supported 00:07:48.049 Fixed Capacity Management: Not Supported 00:07:48.049 Variable Capacity Management: Not Supported 00:07:48.049 Delete Endurance Group: Not Supported 00:07:48.049 Delete NVM Set: Not Supported 00:07:48.049 Extended LBA Formats Supported: Supported 00:07:48.049 Flexible Data Placement Supported: Supported 00:07:48.049 00:07:48.049 Controller Memory Buffer Support 00:07:48.049 ================================ 00:07:48.049 Supported: No 00:07:48.049 
00:07:48.049 Persistent Memory Region Support 00:07:48.049 ================================ 00:07:48.049 Supported: No 00:07:48.049 00:07:48.049 Admin Command Set Attributes 00:07:48.049 ============================ 00:07:48.049 Security Send/Receive: Not Supported 00:07:48.049 Format NVM: Supported 00:07:48.049 Firmware Activate/Download: Not Supported 00:07:48.049 Namespace Management: Supported 00:07:48.049 Device Self-Test: Not Supported 00:07:48.049 Directives: Supported 00:07:48.049 NVMe-MI: Not Supported 00:07:48.049 Virtualization Management: Not Supported 00:07:48.049 Doorbell Buffer Config: Supported 00:07:48.049 Get LBA Status Capability: Not Supported 00:07:48.049 Command & Feature Lockdown Capability: Not Supported 00:07:48.049 Abort Command Limit: 4 00:07:48.049 Async Event Request Limit: 4 00:07:48.049 Number of Firmware Slots: N/A 00:07:48.049 Firmware Slot 1 Read-Only: N/A 00:07:48.049 Firmware Activation Without Reset: N/A 00:07:48.049 Multiple Update Detection Support: N/A 00:07:48.049 Firmware Update Granularity: No Information Provided 00:07:48.049 Per-Namespace SMART Log: Yes 00:07:48.049 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.049 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:48.049 Command Effects Log Page: Supported 00:07:48.049 Get Log Page Extended Data: Supported 00:07:48.049 Telemetry Log Pages: Not Supported 00:07:48.049 Persistent Event Log Pages: Not Supported 00:07:48.049 Supported Log Pages Log Page: May Support 00:07:48.049 Commands Supported & Effects Log Page: Not Supported 00:07:48.049 Feature Identifiers & Effects Log Page:May Support 00:07:48.049 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.049 Data Area 4 for Telemetry Log: Not Supported 00:07:48.049 Error Log Page Entries Supported: 1 00:07:48.049 Keep Alive: Not Supported 00:07:48.049 00:07:48.049 NVM Command Set Attributes 00:07:48.049 ========================== 00:07:48.049 Submission Queue Entry Size 00:07:48.049 Max: 64 00:07:48.049 Min: 64 00:07:48.049 Completion Queue Entry Size 00:07:48.049 Max: 16 00:07:48.049 Min: 16 00:07:48.049 Number of Namespaces: 256 00:07:48.049 Compare Command: Supported 00:07:48.049 Write Uncorrectable Command: Not Supported 00:07:48.049 Dataset Management Command: Supported 00:07:48.049 Write Zeroes Command: Supported 00:07:48.049 Set Features Save Field: Supported 00:07:48.049 Reservations: Not Supported 00:07:48.049 Timestamp: Supported 00:07:48.049 Copy: Supported 00:07:48.049 Volatile Write Cache: Present 00:07:48.049 Atomic Write Unit (Normal): 1 00:07:48.049 Atomic Write Unit (PFail): 1 00:07:48.049 Atomic Compare & Write Unit: 1 00:07:48.049 Fused Compare & Write: Not Supported 00:07:48.049 Scatter-Gather List 00:07:48.049 SGL Command Set: Supported 00:07:48.049 SGL Keyed: Not Supported 00:07:48.049 SGL Bit Bucket Descriptor: Not Supported 00:07:48.049 SGL Metadata Pointer: Not Supported 00:07:48.049 Oversized SGL: Not Supported 00:07:48.049 SGL Metadata Address: Not Supported 00:07:48.049 SGL Offset: Not Supported 00:07:48.049 Transport SGL Data Block: Not Supported 00:07:48.049 Replay Protected Memory Block: Not Supported 00:07:48.049 00:07:48.049 Firmware Slot Information 00:07:48.049 ========================= 00:07:48.049 Active slot: 1 00:07:48.049 Slot 1 Firmware Revision: 1.0 00:07:48.049 00:07:48.049 00:07:48.049 Commands Supported and Effects 00:07:48.049 ============================== 00:07:48.049 Admin Commands 00:07:48.049 -------------- 00:07:48.049 Delete I/O Submission Queue (00h): Supported 
00:07:48.049 Create I/O Submission Queue (01h): Supported 00:07:48.049 Get Log Page (02h): Supported 00:07:48.049 Delete I/O Completion Queue (04h): Supported 00:07:48.049 Create I/O Completion Queue (05h): Supported 00:07:48.049 Identify (06h): Supported 00:07:48.049 Abort (08h): Supported 00:07:48.049 Set Features (09h): Supported 00:07:48.049 Get Features (0Ah): Supported 00:07:48.049 Asynchronous Event Request (0Ch): Supported 00:07:48.049 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.049 Directive Send (19h): Supported 00:07:48.049 Directive Receive (1Ah): Supported 00:07:48.049 Virtualization Management (1Ch): Supported 00:07:48.049 Doorbell Buffer Config (7Ch): Supported 00:07:48.049 Format NVM (80h): Supported LBA-Change 00:07:48.049 I/O Commands 00:07:48.049 ------------ 00:07:48.049 Flush (00h): Supported LBA-Change 00:07:48.049 Write (01h): Supported LBA-Change 00:07:48.049 Read (02h): Supported 00:07:48.049 Compare (05h): Supported 00:07:48.049 Write Zeroes (08h): Supported LBA-Change 00:07:48.049 Dataset Management (09h): Supported LBA-Change 00:07:48.049 Unknown (0Ch): Supported 00:07:48.049 Unknown (12h): Supported 00:07:48.049 Copy (19h): Supported LBA-Change 00:07:48.049 Unknown (1Dh): Supported LBA-Change 00:07:48.049 00:07:48.049 Error Log 00:07:48.049 ========= 00:07:48.049 00:07:48.049 Arbitration 00:07:48.049 =========== 00:07:48.049 Arbitration Burst: no limit 00:07:48.049 00:07:48.049 Power Management 00:07:48.049 ================ 00:07:48.049 Number of Power States: 1 00:07:48.049 Current Power State: Power State #0 00:07:48.049 Power State #0: 00:07:48.049 Max Power: 25.00 W 00:07:48.049 Non-Operational State: Operational 00:07:48.049 Entry Latency: 16 microseconds 00:07:48.049 Exit Latency: 4 microseconds 00:07:48.049 Relative Read Throughput: 0 00:07:48.049 Relative Read Latency: 0 00:07:48.049 Relative Write Throughput: 0 00:07:48.049 Relative Write Latency: 0 00:07:48.049 Idle Power: Not Reported 00:07:48.049 Active Power: Not Reported 00:07:48.049 Non-Operational Permissive Mode: Not Supported 00:07:48.049 00:07:48.049 Health Information 00:07:48.049 ================== 00:07:48.049 Critical Warnings: 00:07:48.049 Available Spare Space: OK 00:07:48.049 Temperature: OK 00:07:48.049 Device Reliability: OK 00:07:48.049 Read Only: No 00:07:48.049 Volatile Memory Backup: OK 00:07:48.049 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.049 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.049 Available Spare: 0% 00:07:48.049 Available Spare Threshold: 0% 00:07:48.049 Life Percentage Used: 0% 00:07:48.049 Data Units Read: 835 00:07:48.049 Data Units Written: 764 00:07:48.049 Host Read Commands: 38342 00:07:48.049 Host Write Commands: 37765 00:07:48.049 Controller Busy Time: 0 minutes 00:07:48.049 Power Cycles: 0 00:07:48.049 Power On Hours: 0 hours 00:07:48.049 Unsafe Shutdowns: 0 00:07:48.049 Unrecoverable Media Errors: 0 00:07:48.049 Lifetime Error Log Entries: 0 00:07:48.049 Warning Temperature Time: 0 minutes 00:07:48.049 Critical Temperature Time: 0 minutes 00:07:48.049 00:07:48.049 Number of Queues 00:07:48.049 ================ 00:07:48.049 Number of I/O Submission Queues: 64 00:07:48.049 Number of I/O Completion Queues: 64 00:07:48.049 00:07:48.049 ZNS Specific Controller Data 00:07:48.049 ============================ 00:07:48.049 Zone Append Size Limit: 0 00:07:48.049 00:07:48.049 00:07:48.049 Active Namespaces 00:07:48.049 ================= 00:07:48.049 Namespace ID:1 00:07:48.049 Error Recovery Timeout: Unlimited 00:07:48.049 
Command Set Identifier: NVM (00h) 00:07:48.049 Deallocate: Supported 00:07:48.049 Deallocated/Unwritten Error: Supported 00:07:48.049 Deallocated Read Value: All 0x00 00:07:48.049 Deallocate in Write Zeroes: Not Supported 00:07:48.049 Deallocated Guard Field: 0xFFFF 00:07:48.049 Flush: Supported 00:07:48.049 Reservation: Not Supported 00:07:48.049 Namespace Sharing Capabilities: Multiple Controllers 00:07:48.049 Size (in LBAs): 262144 (1GiB) 00:07:48.049 Capacity (in LBAs): 262144 (1GiB) 00:07:48.049 Utilization (in LBAs): 262144 (1GiB) 00:07:48.049 Thin Provisioning: Not Supported 00:07:48.050 Per-NS Atomic Units: No 00:07:48.050 Maximum Single Source Range Length: 128 00:07:48.050 Maximum Copy Length: 128 00:07:48.050 Maximum Source Range Count: 128 00:07:48.050 NGUID/EUI64 Never Reused: No 00:07:48.050 Namespace Write Protected: No 00:07:48.050 Endurance group ID: 1 00:07:48.050 Number of LBA Formats: 8 00:07:48.050 Current LBA Format: LBA Format #04 00:07:48.050 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.050 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.050 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.050 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.050 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.050 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.050 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.050 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.050 00:07:48.050 Get Feature FDP: 00:07:48.050 ================ 00:07:48.050 Enabled: Yes 00:07:48.050 FDP configuration index: 0 00:07:48.050 00:07:48.050 FDP configurations log page 00:07:48.050 =========================== 00:07:48.050 Number of FDP configurations: 1 00:07:48.050 Version: 0 00:07:48.050 Size: 112 00:07:48.050 FDP Configuration Descriptor: 0 00:07:48.050 Descriptor Size: 96 00:07:48.050 Reclaim Group Identifier format: 2 00:07:48.050 FDP Volatile Write Cache: Not Present 00:07:48.050 FDP Configuration: Valid 00:07:48.050 Vendor Specific Size: 0 00:07:48.050 Number of Reclaim Groups: 2 00:07:48.050 Number of Reclaim Unit Handles: 8 00:07:48.050 Max Placement Identifiers: 128 00:07:48.050 Number of Namespaces Supported: 256 00:07:48.050 Reclaim unit Nominal Size: 6000000 bytes 00:07:48.050 Estimated Reclaim Unit Time Limit: Not Reported 00:07:48.050 RUH Desc #000: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #001: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #002: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #003: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #004: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #005: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #006: RUH Type: Initially Isolated 00:07:48.050 RUH Desc #007: RUH Type: Initially Isolated 00:07:48.050 00:07:48.050 FDP reclaim unit handle usage log page 00:07:48.050 ====================================== 00:07:48.050 [2024-12-05 16:54:22.403610] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62762 terminated unexpected 00:07:48.050 Number of Reclaim Unit Handles: 8 00:07:48.050 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:48.050 RUH Usage Desc #001: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc #002: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc #003: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc #004: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc #005: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc #006: RUH Attributes: Unused 00:07:48.050 RUH Usage Desc 
#007: RUH Attributes: Unused 00:07:48.050 00:07:48.050 FDP statistics log page 00:07:48.050 ======================= 00:07:48.050 Host bytes with metadata written: 490119168 00:07:48.050 Media bytes with metadata written: 490172416 00:07:48.050 Media bytes erased: 0 00:07:48.050 00:07:48.050 FDP events log page 00:07:48.050 =================== 00:07:48.050 Number of FDP events: 0 00:07:48.050 00:07:48.050 NVM Specific Namespace Data 00:07:48.050 =========================== 00:07:48.050 Logical Block Storage Tag Mask: 0 00:07:48.050 Protection Information Capabilities: 00:07:48.050 16b Guard Protection Information Storage Tag Support: No 00:07:48.050 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.050 Storage Tag Check Read Support: No 00:07:48.050 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.050 ===================================================== 00:07:48.050 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.050 ===================================================== 00:07:48.050 Controller Capabilities/Features 00:07:48.050 ================================ 00:07:48.050 Vendor ID: 1b36 00:07:48.050 Subsystem Vendor ID: 1af4 00:07:48.050 Serial Number: 12340 00:07:48.050 Model Number: QEMU NVMe Ctrl 00:07:48.050 Firmware Version: 8.0.0 00:07:48.050 Recommended Arb Burst: 6 00:07:48.050 IEEE OUI Identifier: 00 54 52 00:07:48.050 Multi-path I/O 00:07:48.050 May have multiple subsystem ports: No 00:07:48.050 May have multiple controllers: No 00:07:48.050 Associated with SR-IOV VF: No 00:07:48.050 Max Data Transfer Size: 524288 00:07:48.050 Max Number of Namespaces: 256 00:07:48.050 Max Number of I/O Queues: 64 00:07:48.050 NVMe Specification Version (VS): 1.4 00:07:48.050 NVMe Specification Version (Identify): 1.4 00:07:48.050 Maximum Queue Entries: 2048 00:07:48.050 Contiguous Queues Required: Yes 00:07:48.050 Arbitration Mechanisms Supported 00:07:48.050 Weighted Round Robin: Not Supported 00:07:48.050 Vendor Specific: Not Supported 00:07:48.050 Reset Timeout: 7500 ms 00:07:48.050 Doorbell Stride: 4 bytes 00:07:48.050 NVM Subsystem Reset: Not Supported 00:07:48.050 Command Sets Supported 00:07:48.050 NVM Command Set: Supported 00:07:48.050 Boot Partition: Not Supported 00:07:48.050 Memory Page Size Minimum: 4096 bytes 00:07:48.050 Memory Page Size Maximum: 65536 bytes 00:07:48.050 Persistent Memory Region: Not Supported 00:07:48.050 Optional Asynchronous Events Supported 00:07:48.050 Namespace Attribute Notices: Supported 00:07:48.050 Firmware Activation Notices: Not Supported 00:07:48.050 ANA Change Notices: Not Supported 00:07:48.050 PLE Aggregate Log Change Notices: Not Supported 00:07:48.050 LBA Status Info Alert Notices: Not Supported 00:07:48.050 EGE Aggregate Log Change 
Notices: Not Supported 00:07:48.050 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.050 Zone Descriptor Change Notices: Not Supported 00:07:48.050 Discovery Log Change Notices: Not Supported 00:07:48.050 Controller Attributes 00:07:48.050 128-bit Host Identifier: Not Supported 00:07:48.050 Non-Operational Permissive Mode: Not Supported 00:07:48.050 NVM Sets: Not Supported 00:07:48.050 Read Recovery Levels: Not Supported 00:07:48.050 Endurance Groups: Not Supported 00:07:48.050 Predictable Latency Mode: Not Supported 00:07:48.050 Traffic Based Keep ALive: Not Supported 00:07:48.050 Namespace Granularity: Not Supported 00:07:48.050 SQ Associations: Not Supported 00:07:48.050 UUID List: Not Supported 00:07:48.050 Multi-Domain Subsystem: Not Supported 00:07:48.050 Fixed Capacity Management: Not Supported 00:07:48.050 Variable Capacity Management: Not Supported 00:07:48.050 Delete Endurance Group: Not Supported 00:07:48.050 Delete NVM Set: Not Supported 00:07:48.050 Extended LBA Formats Supported: Supported 00:07:48.050 Flexible Data Placement Supported: Not Supported 00:07:48.050 00:07:48.050 Controller Memory Buffer Support 00:07:48.050 ================================ 00:07:48.050 Supported: No 00:07:48.050 00:07:48.050 Persistent Memory Region Support 00:07:48.050 ================================ 00:07:48.050 Supported: No 00:07:48.050 00:07:48.050 Admin Command Set Attributes 00:07:48.050 ============================ 00:07:48.050 Security Send/Receive: Not Supported 00:07:48.050 Format NVM: Supported 00:07:48.050 Firmware Activate/Download: Not Supported 00:07:48.050 Namespace Management: Supported 00:07:48.050 Device Self-Test: Not Supported 00:07:48.050 Directives: Supported 00:07:48.050 NVMe-MI: Not Supported 00:07:48.050 Virtualization Management: Not Supported 00:07:48.050 Doorbell Buffer Config: Supported 00:07:48.050 Get LBA Status Capability: Not Supported 00:07:48.050 Command & Feature Lockdown Capability: Not Supported 00:07:48.050 Abort Command Limit: 4 00:07:48.050 Async Event Request Limit: 4 00:07:48.050 Number of Firmware Slots: N/A 00:07:48.050 Firmware Slot 1 Read-Only: N/A 00:07:48.050 Firmware Activation Without Reset: N/A 00:07:48.050 Multiple Update Detection Support: N/A 00:07:48.050 Firmware Update Granularity: No Information Provided 00:07:48.050 Per-Namespace SMART Log: Yes 00:07:48.050 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.050 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.050 Command Effects Log Page: Supported 00:07:48.050 Get Log Page Extended Data: Supported 00:07:48.050 Telemetry Log Pages: Not Supported 00:07:48.050 Persistent Event Log Pages: Not Supported 00:07:48.051 Supported Log Pages Log Page: May Support 00:07:48.051 Commands Supported & Effects Log Page: Not Supported 00:07:48.051 Feature Identifiers & Effects Log Page:May Support 00:07:48.051 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.051 Data Area 4 for Telemetry Log: Not Supported 00:07:48.051 Error Log Page Entries Supported: 1 00:07:48.051 Keep Alive: Not Supported 00:07:48.051 00:07:48.051 NVM Command Set Attributes 00:07:48.051 ========================== 00:07:48.051 Submission Queue Entry Size 00:07:48.051 Max: 64 00:07:48.051 Min: 64 00:07:48.051 Completion Queue Entry Size 00:07:48.051 Max: 16 00:07:48.051 Min: 16 00:07:48.051 Number of Namespaces: 256 00:07:48.051 Compare Command: Supported 00:07:48.051 Write Uncorrectable Command: Not Supported 00:07:48.051 Dataset Management Command: Supported 00:07:48.051 Write Zeroes Command: 
Supported 00:07:48.051 Set Features Save Field: Supported 00:07:48.051 Reservations: Not Supported 00:07:48.051 Timestamp: Supported 00:07:48.051 Copy: Supported 00:07:48.051 Volatile Write Cache: Present 00:07:48.051 Atomic Write Unit (Normal): 1 00:07:48.051 Atomic Write Unit (PFail): 1 00:07:48.051 Atomic Compare & Write Unit: 1 00:07:48.051 Fused Compare & Write: Not Supported 00:07:48.051 Scatter-Gather List 00:07:48.051 SGL Command Set: Supported 00:07:48.051 SGL Keyed: Not Supported 00:07:48.051 SGL Bit Bucket Descriptor: Not Supported 00:07:48.051 SGL Metadata Pointer: Not Supported 00:07:48.051 Oversized SGL: Not Supported 00:07:48.051 SGL Metadata Address: Not Supported 00:07:48.051 SGL Offset: Not Supported 00:07:48.051 Transport SGL Data Block: Not Supported 00:07:48.051 Replay Protected Memory Block: Not Supported 00:07:48.051 00:07:48.051 Firmware Slot Information 00:07:48.051 ========================= 00:07:48.051 Active slot: 1 00:07:48.051 Slot 1 Firmware Revision: 1.0 00:07:48.051 00:07:48.051 00:07:48.051 Commands Supported and Effects 00:07:48.051 ============================== 00:07:48.051 Admin Commands 00:07:48.051 -------------- 00:07:48.051 Delete I/O Submission Queue (00h): Supported 00:07:48.051 Create I/O Submission Queue (01h): Supported 00:07:48.051 Get Log Page (02h): Supported 00:07:48.051 Delete I/O Completion Queue (04h): Supported 00:07:48.051 Create I/O Completion Queue (05h): Supported 00:07:48.051 Identify (06h): Supported 00:07:48.051 Abort (08h): Supported 00:07:48.051 Set Features (09h): Supported 00:07:48.051 Get Features (0Ah): Supported 00:07:48.051 Asynchronous Event Request (0Ch): Supported 00:07:48.051 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.051 Directive Send (19h): Supported 00:07:48.051 Directive Receive (1Ah): Supported 00:07:48.051 Virtualization Management (1Ch): Supported 00:07:48.051 Doorbell Buffer Config (7Ch): Supported 00:07:48.051 Format NVM (80h): Supported LBA-Change 00:07:48.051 I/O Commands 00:07:48.051 ------------ 00:07:48.051 Flush (00h): Supported LBA-Change 00:07:48.051 Write (01h): Supported LBA-Change 00:07:48.051 Read (02h): Supported 00:07:48.051 Compare (05h): Supported 00:07:48.051 Write Zeroes (08h): Supported LBA-Change 00:07:48.051 Dataset Management (09h): Supported LBA-Change 00:07:48.051 Unknown (0Ch): Supported 00:07:48.051 Unknown (12h): Supported 00:07:48.051 Copy (19h): Supported LBA-Change 00:07:48.051 Unknown (1Dh): Supported LBA-Change 00:07:48.051 00:07:48.051 Error Log 00:07:48.051 ========= 00:07:48.051 00:07:48.051 Arbitration 00:07:48.051 =========== 00:07:48.051 Arbitration Burst: no limit 00:07:48.051 00:07:48.051 Power Management 00:07:48.051 ================ 00:07:48.051 Number of Power States: 1 00:07:48.051 Current Power State: Power State #0 00:07:48.051 Power State #0: 00:07:48.051 Max Power: 25.00 W 00:07:48.051 Non-Operational State: Operational 00:07:48.051 Entry Latency: 16 microseconds 00:07:48.051 Exit Latency: 4 microseconds 00:07:48.051 Relative Read Throughput: 0 00:07:48.051 Relative Read Latency: 0 00:07:48.051 Relative Write Throughput: 0 00:07:48.051 Relative Write Latency: 0 00:07:48.051 Idle Power: Not Reported 00:07:48.051 Active Power: Not Reported 00:07:48.051 Non-Operational Permissive Mode: Not Supported 00:07:48.051 00:07:48.051 Health Information 00:07:48.051 ================== 00:07:48.051 Critical Warnings: 00:07:48.051 Available Spare Space: OK 00:07:48.051 Temperature: OK 00:07:48.051 Device Reliability: OK 00:07:48.051 Read Only: No 
00:07:48.051 Volatile Memory Backup: OK 00:07:48.051 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.051 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.051 Available Spare: 0% 00:07:48.051 Available Spare Threshold: 0% 00:07:48.051 Life Percentage Used: 0% 00:07:48.051 Data Units Read: 670 00:07:48.051 Data Units Written: 598 00:07:48.051 Host Read Commands: 36456 00:07:48.051 Host Write Commands: 36242 00:07:48.051 Controller Busy Time: 0 minutes 00:07:48.051 Power Cycles: 0 00:07:48.051 Power On Hours: 0 hours 00:07:48.051 Unsafe Shutdowns: 0 00:07:48.051 Unrecoverable Media Errors: 0 00:07:48.051 Lifetime Error Log Entries: 0 00:07:48.051 Warning Temperature Time: 0 minutes 00:07:48.051 Critical Temperature Time: 0 minutes 00:07:48.051 00:07:48.051 Number of Queues 00:07:48.051 ================ 00:07:48.051 Number of I/O Submission Queues: 64 00:07:48.051 Number of I/O Completion Queues: 64 00:07:48.051 00:07:48.051 ZNS Specific Controller Data 00:07:48.051 ============================ 00:07:48.051 Zone Append Size Limit: 0 00:07:48.051 00:07:48.051 00:07:48.051 Active Namespaces 00:07:48.051 ================= 00:07:48.051 Namespace ID:1 00:07:48.051 Error Recovery Timeout: Unlimited 00:07:48.051 Command Set Identifier: NVM (00h) 00:07:48.051 Deallocate: Supported 00:07:48.051 Deallocated/Unwritten Error: Supported 00:07:48.051 Deallocated Read Value: All 0x00 00:07:48.051 Deallocate in Write Zeroes: Not Supported 00:07:48.051 Deallocated Guard Field: 0xFFFF 00:07:48.051 Flush: Supported 00:07:48.051 Reservation: Not Supported 00:07:48.051 Metadata Transferred as: Separate Metadata Buffer 00:07:48.051 Namespace Sharing Capabilities: Private 00:07:48.051 Size (in LBAs): 1548666 (5GiB) 00:07:48.051 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.051 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.051 Thin Provisioning: Not Supported 00:07:48.051 Per-NS Atomic Units: No 00:07:48.051 Maximum Single Source Range Length: 128 00:07:48.051 Maximum Copy Length: 128 00:07:48.051 Maximum Source Range Count: 128 00:07:48.051 NGUID/EUI64 Never Reused: No 00:07:48.051 Namespace Write Protected: No 00:07:48.051 Number of LBA Formats: 8 00:07:48.051 [2024-12-05 16:54:22.405415] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62762 terminated unexpected 00:07:48.051 Current LBA Format: LBA Format #07 00:07:48.051 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.051 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.051 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.051 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.051 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.051 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.051 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.051 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.051 00:07:48.051 NVM Specific Namespace Data 00:07:48.051 =========================== 00:07:48.051 Logical Block Storage Tag Mask: 0 00:07:48.051 Protection Information Capabilities: 00:07:48.051 16b Guard Protection Information Storage Tag Support: No 00:07:48.051 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.051 Storage Tag Check Read Support: No 00:07:48.051 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #02: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.051 ===================================================== 00:07:48.051 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.051 ===================================================== 00:07:48.051 Controller Capabilities/Features 00:07:48.051 ================================ 00:07:48.052 Vendor ID: 1b36 00:07:48.052 Subsystem Vendor ID: 1af4 00:07:48.052 Serial Number: 12341 00:07:48.052 Model Number: QEMU NVMe Ctrl 00:07:48.052 Firmware Version: 8.0.0 00:07:48.052 Recommended Arb Burst: 6 00:07:48.052 IEEE OUI Identifier: 00 54 52 00:07:48.052 Multi-path I/O 00:07:48.052 May have multiple subsystem ports: No 00:07:48.052 May have multiple controllers: No 00:07:48.052 Associated with SR-IOV VF: No 00:07:48.052 Max Data Transfer Size: 524288 00:07:48.052 Max Number of Namespaces: 256 00:07:48.052 Max Number of I/O Queues: 64 00:07:48.052 NVMe Specification Version (VS): 1.4 00:07:48.052 NVMe Specification Version (Identify): 1.4 00:07:48.052 Maximum Queue Entries: 2048 00:07:48.052 Contiguous Queues Required: Yes 00:07:48.052 Arbitration Mechanisms Supported 00:07:48.052 Weighted Round Robin: Not Supported 00:07:48.052 Vendor Specific: Not Supported 00:07:48.052 Reset Timeout: 7500 ms 00:07:48.052 Doorbell Stride: 4 bytes 00:07:48.052 NVM Subsystem Reset: Not Supported 00:07:48.052 Command Sets Supported 00:07:48.052 NVM Command Set: Supported 00:07:48.052 Boot Partition: Not Supported 00:07:48.052 Memory Page Size Minimum: 4096 bytes 00:07:48.052 Memory Page Size Maximum: 65536 bytes 00:07:48.052 Persistent Memory Region: Not Supported 00:07:48.052 Optional Asynchronous Events Supported 00:07:48.052 Namespace Attribute Notices: Supported 00:07:48.052 Firmware Activation Notices: Not Supported 00:07:48.052 ANA Change Notices: Not Supported 00:07:48.052 PLE Aggregate Log Change Notices: Not Supported 00:07:48.052 LBA Status Info Alert Notices: Not Supported 00:07:48.052 EGE Aggregate Log Change Notices: Not Supported 00:07:48.052 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.052 Zone Descriptor Change Notices: Not Supported 00:07:48.052 Discovery Log Change Notices: Not Supported 00:07:48.052 Controller Attributes 00:07:48.052 128-bit Host Identifier: Not Supported 00:07:48.052 Non-Operational Permissive Mode: Not Supported 00:07:48.052 NVM Sets: Not Supported 00:07:48.052 Read Recovery Levels: Not Supported 00:07:48.052 Endurance Groups: Not Supported 00:07:48.052 Predictable Latency Mode: Not Supported 00:07:48.052 Traffic Based Keep ALive: Not Supported 00:07:48.052 Namespace Granularity: Not Supported 00:07:48.052 SQ Associations: Not Supported 00:07:48.052 UUID List: Not Supported 00:07:48.052 Multi-Domain Subsystem: Not Supported 00:07:48.052 Fixed Capacity Management: Not Supported 00:07:48.052 Variable Capacity Management: Not Supported 00:07:48.052 Delete Endurance Group: Not Supported 00:07:48.052 Delete NVM Set: Not Supported 00:07:48.052 Extended LBA Formats Supported: Supported 00:07:48.052 Flexible Data Placement 
Supported: Not Supported 00:07:48.052 00:07:48.052 Controller Memory Buffer Support 00:07:48.052 ================================ 00:07:48.052 Supported: No 00:07:48.052 00:07:48.052 Persistent Memory Region Support 00:07:48.052 ================================ 00:07:48.052 Supported: No 00:07:48.052 00:07:48.052 Admin Command Set Attributes 00:07:48.052 ============================ 00:07:48.052 Security Send/Receive: Not Supported 00:07:48.052 Format NVM: Supported 00:07:48.052 Firmware Activate/Download: Not Supported 00:07:48.052 Namespace Management: Supported 00:07:48.052 Device Self-Test: Not Supported 00:07:48.052 Directives: Supported 00:07:48.052 NVMe-MI: Not Supported 00:07:48.052 Virtualization Management: Not Supported 00:07:48.052 Doorbell Buffer Config: Supported 00:07:48.052 Get LBA Status Capability: Not Supported 00:07:48.052 Command & Feature Lockdown Capability: Not Supported 00:07:48.052 Abort Command Limit: 4 00:07:48.052 Async Event Request Limit: 4 00:07:48.052 Number of Firmware Slots: N/A 00:07:48.052 Firmware Slot 1 Read-Only: N/A 00:07:48.052 Firmware Activation Without Reset: N/A 00:07:48.052 Multiple Update Detection Support: N/A 00:07:48.052 Firmware Update Granularity: No Information Provided 00:07:48.052 Per-Namespace SMART Log: Yes 00:07:48.052 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.052 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:48.052 Command Effects Log Page: Supported 00:07:48.052 Get Log Page Extended Data: Supported 00:07:48.052 Telemetry Log Pages: Not Supported 00:07:48.052 Persistent Event Log Pages: Not Supported 00:07:48.052 Supported Log Pages Log Page: May Support 00:07:48.052 Commands Supported & Effects Log Page: Not Supported 00:07:48.052 Feature Identifiers & Effects Log Page:May Support 00:07:48.052 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.052 Data Area 4 for Telemetry Log: Not Supported 00:07:48.052 Error Log Page Entries Supported: 1 00:07:48.052 Keep Alive: Not Supported 00:07:48.052 00:07:48.052 NVM Command Set Attributes 00:07:48.052 ========================== 00:07:48.052 Submission Queue Entry Size 00:07:48.052 Max: 64 00:07:48.052 Min: 64 00:07:48.052 Completion Queue Entry Size 00:07:48.052 Max: 16 00:07:48.052 Min: 16 00:07:48.052 Number of Namespaces: 256 00:07:48.052 Compare Command: Supported 00:07:48.052 Write Uncorrectable Command: Not Supported 00:07:48.052 Dataset Management Command: Supported 00:07:48.052 Write Zeroes Command: Supported 00:07:48.052 Set Features Save Field: Supported 00:07:48.052 Reservations: Not Supported 00:07:48.052 Timestamp: Supported 00:07:48.052 Copy: Supported 00:07:48.052 Volatile Write Cache: Present 00:07:48.052 Atomic Write Unit (Normal): 1 00:07:48.052 Atomic Write Unit (PFail): 1 00:07:48.052 Atomic Compare & Write Unit: 1 00:07:48.052 Fused Compare & Write: Not Supported 00:07:48.052 Scatter-Gather List 00:07:48.052 SGL Command Set: Supported 00:07:48.052 SGL Keyed: Not Supported 00:07:48.052 SGL Bit Bucket Descriptor: Not Supported 00:07:48.052 SGL Metadata Pointer: Not Supported 00:07:48.052 Oversized SGL: Not Supported 00:07:48.052 SGL Metadata Address: Not Supported 00:07:48.052 SGL Offset: Not Supported 00:07:48.052 Transport SGL Data Block: Not Supported 00:07:48.052 Replay Protected Memory Block: Not Supported 00:07:48.052 00:07:48.052 Firmware Slot Information 00:07:48.052 ========================= 00:07:48.052 Active slot: 1 00:07:48.052 Slot 1 Firmware Revision: 1.0 00:07:48.052 00:07:48.052 00:07:48.052 Commands Supported and Effects 
00:07:48.052 ============================== 00:07:48.052 Admin Commands 00:07:48.052 -------------- 00:07:48.052 Delete I/O Submission Queue (00h): Supported 00:07:48.052 Create I/O Submission Queue (01h): Supported 00:07:48.052 Get Log Page (02h): Supported 00:07:48.052 Delete I/O Completion Queue (04h): Supported 00:07:48.052 Create I/O Completion Queue (05h): Supported 00:07:48.052 Identify (06h): Supported 00:07:48.052 Abort (08h): Supported 00:07:48.052 Set Features (09h): Supported 00:07:48.052 Get Features (0Ah): Supported 00:07:48.052 Asynchronous Event Request (0Ch): Supported 00:07:48.052 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.052 Directive Send (19h): Supported 00:07:48.052 Directive Receive (1Ah): Supported 00:07:48.052 Virtualization Management (1Ch): Supported 00:07:48.052 Doorbell Buffer Config (7Ch): Supported 00:07:48.052 Format NVM (80h): Supported LBA-Change 00:07:48.052 I/O Commands 00:07:48.052 ------------ 00:07:48.052 Flush (00h): Supported LBA-Change 00:07:48.052 Write (01h): Supported LBA-Change 00:07:48.052 Read (02h): Supported 00:07:48.052 Compare (05h): Supported 00:07:48.052 Write Zeroes (08h): Supported LBA-Change 00:07:48.052 Dataset Management (09h): Supported LBA-Change 00:07:48.052 Unknown (0Ch): Supported 00:07:48.052 Unknown (12h): Supported 00:07:48.052 Copy (19h): Supported LBA-Change 00:07:48.052 Unknown (1Dh): Supported LBA-Change 00:07:48.052 00:07:48.052 Error Log 00:07:48.052 ========= 00:07:48.052 00:07:48.052 Arbitration 00:07:48.052 =========== 00:07:48.052 Arbitration Burst: no limit 00:07:48.052 00:07:48.052 Power Management 00:07:48.052 ================ 00:07:48.052 Number of Power States: 1 00:07:48.052 Current Power State: Power State #0 00:07:48.052 Power State #0: 00:07:48.052 Max Power: 25.00 W 00:07:48.052 Non-Operational State: Operational 00:07:48.052 Entry Latency: 16 microseconds 00:07:48.052 Exit Latency: 4 microseconds 00:07:48.052 Relative Read Throughput: 0 00:07:48.052 Relative Read Latency: 0 00:07:48.052 Relative Write Throughput: 0 00:07:48.052 Relative Write Latency: 0 00:07:48.052 Idle Power: Not Reported 00:07:48.052 Active Power: Not Reported 00:07:48.052 Non-Operational Permissive Mode: Not Supported 00:07:48.052 00:07:48.053 Health Information 00:07:48.053 ================== 00:07:48.053 Critical Warnings: 00:07:48.053 Available Spare Space: OK 00:07:48.053 Temperature: OK 00:07:48.053 Device Reliability: OK 00:07:48.053 Read Only: No 00:07:48.053 Volatile Memory Backup: OK 00:07:48.053 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.053 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.053 Available Spare: 0% 00:07:48.053 Available Spare Threshold: 0% 00:07:48.053 Life Percentage Used: 0% 00:07:48.053 Data Units Read: 1062 00:07:48.053 Data Units Written: 928 00:07:48.053 Host Read Commands: 55149 00:07:48.053 Host Write Commands: 53941 00:07:48.053 Controller Busy Time: 0 minutes 00:07:48.053 Power Cycles: 0 00:07:48.053 Power On Hours: 0 hours 00:07:48.053 Unsafe Shutdowns: 0 00:07:48.053 Unrecoverable Media Errors: 0 00:07:48.053 Lifetime Error Log Entries: 0 00:07:48.053 Warning Temperature Time: 0 minutes 00:07:48.053 Critical Temperature Time: 0 minutes 00:07:48.053 00:07:48.053 Number of Queues 00:07:48.053 ================ 00:07:48.053 Number of I/O Submission Queues: 64 00:07:48.053 Number of I/O Completion Queues: 64 00:07:48.053 00:07:48.053 ZNS Specific Controller Data 00:07:48.053 ============================ 00:07:48.053 Zone Append Size Limit: 0 00:07:48.053 
00:07:48.053 00:07:48.053 Active Namespaces 00:07:48.053 ================= 00:07:48.053 Namespace ID:1 00:07:48.053 Error Recovery Timeout: Unlimited 00:07:48.053 Command Set Identifier: NVM (00h) 00:07:48.053 Deallocate: Supported 00:07:48.053 Deallocated/Unwritten Error: Supported 00:07:48.053 Deallocated Read Value: All 0x00 00:07:48.053 Deallocate in Write Zeroes: Not Supported 00:07:48.053 Deallocated Guard Field: 0xFFFF 00:07:48.053 Flush: Supported 00:07:48.053 Reservation: Not Supported 00:07:48.053 Namespace Sharing Capabilities: Private 00:07:48.053 Size (in LBAs): 1310720 (5GiB) 00:07:48.053 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.053 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.053 Thin Provisioning: Not Supported 00:07:48.053 Per-NS Atomic Units: No 00:07:48.053 Maximum Single Source Range Length: 128 00:07:48.053 Maximum Copy Length: 128 00:07:48.053 Maximum Source Range Count: 128 00:07:48.053 NGUID/EUI64 Never Reused: No 00:07:48.053 Namespace Write Protected: No 00:07:48.053 Number of LBA Formats: 8 00:07:48.053 Current LBA Format: LBA Format #04 00:07:48.053 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.053 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.053 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.053 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.053 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.053 [2024-12-05 16:54:22.407181] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62762 terminated unexpected 00:07:48.053 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.053 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.053 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.053 00:07:48.053 NVM Specific Namespace Data 00:07:48.053 =========================== 00:07:48.053 Logical Block Storage Tag Mask: 0 00:07:48.053 Protection Information Capabilities: 00:07:48.053 16b Guard Protection Information Storage Tag Support: No 00:07:48.053 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.053 Storage Tag Check Read Support: No 00:07:48.053 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.053 ===================================================== 00:07:48.053 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.053 ===================================================== 00:07:48.053 Controller Capabilities/Features 00:07:48.053 ================================ 00:07:48.053 Vendor ID: 1b36 00:07:48.053 Subsystem Vendor ID: 1af4 00:07:48.053 Serial Number: 12342 00:07:48.053 Model Number: QEMU NVMe Ctrl 00:07:48.053 Firmware Version: 8.0.0 00:07:48.053 Recommended Arb Burst: 6 00:07:48.053 IEEE OUI Identifier: 00 54 52 00:07:48.053 Multi-path I/O 
00:07:48.053 May have multiple subsystem ports: No 00:07:48.053 May have multiple controllers: No 00:07:48.053 Associated with SR-IOV VF: No 00:07:48.053 Max Data Transfer Size: 524288 00:07:48.053 Max Number of Namespaces: 256 00:07:48.053 Max Number of I/O Queues: 64 00:07:48.053 NVMe Specification Version (VS): 1.4 00:07:48.053 NVMe Specification Version (Identify): 1.4 00:07:48.053 Maximum Queue Entries: 2048 00:07:48.053 Contiguous Queues Required: Yes 00:07:48.053 Arbitration Mechanisms Supported 00:07:48.053 Weighted Round Robin: Not Supported 00:07:48.053 Vendor Specific: Not Supported 00:07:48.053 Reset Timeout: 7500 ms 00:07:48.053 Doorbell Stride: 4 bytes 00:07:48.053 NVM Subsystem Reset: Not Supported 00:07:48.053 Command Sets Supported 00:07:48.053 NVM Command Set: Supported 00:07:48.053 Boot Partition: Not Supported 00:07:48.053 Memory Page Size Minimum: 4096 bytes 00:07:48.053 Memory Page Size Maximum: 65536 bytes 00:07:48.053 Persistent Memory Region: Not Supported 00:07:48.053 Optional Asynchronous Events Supported 00:07:48.053 Namespace Attribute Notices: Supported 00:07:48.053 Firmware Activation Notices: Not Supported 00:07:48.053 ANA Change Notices: Not Supported 00:07:48.053 PLE Aggregate Log Change Notices: Not Supported 00:07:48.053 LBA Status Info Alert Notices: Not Supported 00:07:48.053 EGE Aggregate Log Change Notices: Not Supported 00:07:48.053 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.053 Zone Descriptor Change Notices: Not Supported 00:07:48.053 Discovery Log Change Notices: Not Supported 00:07:48.053 Controller Attributes 00:07:48.053 128-bit Host Identifier: Not Supported 00:07:48.053 Non-Operational Permissive Mode: Not Supported 00:07:48.053 NVM Sets: Not Supported 00:07:48.053 Read Recovery Levels: Not Supported 00:07:48.053 Endurance Groups: Not Supported 00:07:48.053 Predictable Latency Mode: Not Supported 00:07:48.053 Traffic Based Keep ALive: Not Supported 00:07:48.053 Namespace Granularity: Not Supported 00:07:48.053 SQ Associations: Not Supported 00:07:48.053 UUID List: Not Supported 00:07:48.053 Multi-Domain Subsystem: Not Supported 00:07:48.053 Fixed Capacity Management: Not Supported 00:07:48.053 Variable Capacity Management: Not Supported 00:07:48.053 Delete Endurance Group: Not Supported 00:07:48.053 Delete NVM Set: Not Supported 00:07:48.053 Extended LBA Formats Supported: Supported 00:07:48.053 Flexible Data Placement Supported: Not Supported 00:07:48.053 00:07:48.053 Controller Memory Buffer Support 00:07:48.053 ================================ 00:07:48.053 Supported: No 00:07:48.053 00:07:48.054 Persistent Memory Region Support 00:07:48.054 ================================ 00:07:48.054 Supported: No 00:07:48.054 00:07:48.054 Admin Command Set Attributes 00:07:48.054 ============================ 00:07:48.054 Security Send/Receive: Not Supported 00:07:48.054 Format NVM: Supported 00:07:48.054 Firmware Activate/Download: Not Supported 00:07:48.054 Namespace Management: Supported 00:07:48.054 Device Self-Test: Not Supported 00:07:48.054 Directives: Supported 00:07:48.054 NVMe-MI: Not Supported 00:07:48.054 Virtualization Management: Not Supported 00:07:48.054 Doorbell Buffer Config: Supported 00:07:48.054 Get LBA Status Capability: Not Supported 00:07:48.054 Command & Feature Lockdown Capability: Not Supported 00:07:48.054 Abort Command Limit: 4 00:07:48.054 Async Event Request Limit: 4 00:07:48.054 Number of Firmware Slots: N/A 00:07:48.054 Firmware Slot 1 Read-Only: N/A 00:07:48.054 Firmware Activation Without Reset: N/A 
00:07:48.054 Multiple Update Detection Support: N/A 00:07:48.054 Firmware Update Granularity: No Information Provided 00:07:48.054 Per-Namespace SMART Log: Yes 00:07:48.054 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.054 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.054 Command Effects Log Page: Supported 00:07:48.054 Get Log Page Extended Data: Supported 00:07:48.054 Telemetry Log Pages: Not Supported 00:07:48.054 Persistent Event Log Pages: Not Supported 00:07:48.054 Supported Log Pages Log Page: May Support 00:07:48.054 Commands Supported & Effects Log Page: Not Supported 00:07:48.054 Feature Identifiers & Effects Log Page:May Support 00:07:48.054 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.054 Data Area 4 for Telemetry Log: Not Supported 00:07:48.054 Error Log Page Entries Supported: 1 00:07:48.054 Keep Alive: Not Supported 00:07:48.054 00:07:48.054 NVM Command Set Attributes 00:07:48.054 ========================== 00:07:48.054 Submission Queue Entry Size 00:07:48.054 Max: 64 00:07:48.054 Min: 64 00:07:48.054 Completion Queue Entry Size 00:07:48.054 Max: 16 00:07:48.054 Min: 16 00:07:48.054 Number of Namespaces: 256 00:07:48.054 Compare Command: Supported 00:07:48.054 Write Uncorrectable Command: Not Supported 00:07:48.054 Dataset Management Command: Supported 00:07:48.054 Write Zeroes Command: Supported 00:07:48.054 Set Features Save Field: Supported 00:07:48.054 Reservations: Not Supported 00:07:48.054 Timestamp: Supported 00:07:48.054 Copy: Supported 00:07:48.054 Volatile Write Cache: Present 00:07:48.054 Atomic Write Unit (Normal): 1 00:07:48.054 Atomic Write Unit (PFail): 1 00:07:48.054 Atomic Compare & Write Unit: 1 00:07:48.054 Fused Compare & Write: Not Supported 00:07:48.054 Scatter-Gather List 00:07:48.054 SGL Command Set: Supported 00:07:48.054 SGL Keyed: Not Supported 00:07:48.054 SGL Bit Bucket Descriptor: Not Supported 00:07:48.054 SGL Metadata Pointer: Not Supported 00:07:48.054 Oversized SGL: Not Supported 00:07:48.054 SGL Metadata Address: Not Supported 00:07:48.054 SGL Offset: Not Supported 00:07:48.054 Transport SGL Data Block: Not Supported 00:07:48.054 Replay Protected Memory Block: Not Supported 00:07:48.054 00:07:48.054 Firmware Slot Information 00:07:48.054 ========================= 00:07:48.054 Active slot: 1 00:07:48.054 Slot 1 Firmware Revision: 1.0 00:07:48.054 00:07:48.054 00:07:48.054 Commands Supported and Effects 00:07:48.054 ============================== 00:07:48.054 Admin Commands 00:07:48.054 -------------- 00:07:48.054 Delete I/O Submission Queue (00h): Supported 00:07:48.054 Create I/O Submission Queue (01h): Supported 00:07:48.054 Get Log Page (02h): Supported 00:07:48.054 Delete I/O Completion Queue (04h): Supported 00:07:48.054 Create I/O Completion Queue (05h): Supported 00:07:48.054 Identify (06h): Supported 00:07:48.054 Abort (08h): Supported 00:07:48.054 Set Features (09h): Supported 00:07:48.054 Get Features (0Ah): Supported 00:07:48.054 Asynchronous Event Request (0Ch): Supported 00:07:48.054 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.054 Directive Send (19h): Supported 00:07:48.054 Directive Receive (1Ah): Supported 00:07:48.054 Virtualization Management (1Ch): Supported 00:07:48.054 Doorbell Buffer Config (7Ch): Supported 00:07:48.054 Format NVM (80h): Supported LBA-Change 00:07:48.054 I/O Commands 00:07:48.054 ------------ 00:07:48.054 Flush (00h): Supported LBA-Change 00:07:48.054 Write (01h): Supported LBA-Change 00:07:48.054 Read (02h): Supported 00:07:48.054 Compare (05h): 
Supported 00:07:48.054 Write Zeroes (08h): Supported LBA-Change 00:07:48.054 Dataset Management (09h): Supported LBA-Change 00:07:48.054 Unknown (0Ch): Supported 00:07:48.054 Unknown (12h): Supported 00:07:48.054 Copy (19h): Supported LBA-Change 00:07:48.054 Unknown (1Dh): Supported LBA-Change 00:07:48.054 00:07:48.054 Error Log 00:07:48.054 ========= 00:07:48.054 00:07:48.054 Arbitration 00:07:48.054 =========== 00:07:48.054 Arbitration Burst: no limit 00:07:48.054 00:07:48.054 Power Management 00:07:48.054 ================ 00:07:48.054 Number of Power States: 1 00:07:48.054 Current Power State: Power State #0 00:07:48.054 Power State #0: 00:07:48.054 Max Power: 25.00 W 00:07:48.054 Non-Operational State: Operational 00:07:48.054 Entry Latency: 16 microseconds 00:07:48.054 Exit Latency: 4 microseconds 00:07:48.054 Relative Read Throughput: 0 00:07:48.054 Relative Read Latency: 0 00:07:48.054 Relative Write Throughput: 0 00:07:48.054 Relative Write Latency: 0 00:07:48.054 Idle Power: Not Reported 00:07:48.054 Active Power: Not Reported 00:07:48.054 Non-Operational Permissive Mode: Not Supported 00:07:48.054 00:07:48.054 Health Information 00:07:48.054 ================== 00:07:48.054 Critical Warnings: 00:07:48.054 Available Spare Space: OK 00:07:48.054 Temperature: OK 00:07:48.054 Device Reliability: OK 00:07:48.054 Read Only: No 00:07:48.054 Volatile Memory Backup: OK 00:07:48.054 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.054 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.054 Available Spare: 0% 00:07:48.054 Available Spare Threshold: 0% 00:07:48.054 Life Percentage Used: 0% 00:07:48.054 Data Units Read: 2204 00:07:48.054 Data Units Written: 1991 00:07:48.054 Host Read Commands: 111776 00:07:48.054 Host Write Commands: 110045 00:07:48.054 Controller Busy Time: 0 minutes 00:07:48.054 Power Cycles: 0 00:07:48.054 Power On Hours: 0 hours 00:07:48.054 Unsafe Shutdowns: 0 00:07:48.054 Unrecoverable Media Errors: 0 00:07:48.054 Lifetime Error Log Entries: 0 00:07:48.054 Warning Temperature Time: 0 minutes 00:07:48.054 Critical Temperature Time: 0 minutes 00:07:48.054 00:07:48.054 Number of Queues 00:07:48.054 ================ 00:07:48.054 Number of I/O Submission Queues: 64 00:07:48.054 Number of I/O Completion Queues: 64 00:07:48.054 00:07:48.054 ZNS Specific Controller Data 00:07:48.054 ============================ 00:07:48.054 Zone Append Size Limit: 0 00:07:48.054 00:07:48.054 00:07:48.054 Active Namespaces 00:07:48.054 ================= 00:07:48.054 Namespace ID:1 00:07:48.054 Error Recovery Timeout: Unlimited 00:07:48.054 Command Set Identifier: NVM (00h) 00:07:48.054 Deallocate: Supported 00:07:48.054 Deallocated/Unwritten Error: Supported 00:07:48.054 Deallocated Read Value: All 0x00 00:07:48.054 Deallocate in Write Zeroes: Not Supported 00:07:48.054 Deallocated Guard Field: 0xFFFF 00:07:48.054 Flush: Supported 00:07:48.054 Reservation: Not Supported 00:07:48.054 Namespace Sharing Capabilities: Private 00:07:48.054 Size (in LBAs): 1048576 (4GiB) 00:07:48.054 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.054 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.054 Thin Provisioning: Not Supported 00:07:48.054 Per-NS Atomic Units: No 00:07:48.054 Maximum Single Source Range Length: 128 00:07:48.054 Maximum Copy Length: 128 00:07:48.054 Maximum Source Range Count: 128 00:07:48.054 NGUID/EUI64 Never Reused: No 00:07:48.054 Namespace Write Protected: No 00:07:48.054 Number of LBA Formats: 8 00:07:48.054 Current LBA Format: LBA Format #04 00:07:48.054 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:07:48.054 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.054 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.054 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.054 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.054 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.054 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.054 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.054 00:07:48.054 NVM Specific Namespace Data 00:07:48.055 =========================== 00:07:48.055 Logical Block Storage Tag Mask: 0 00:07:48.055 Protection Information Capabilities: 00:07:48.055 16b Guard Protection Information Storage Tag Support: No 00:07:48.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.055 Storage Tag Check Read Support: No 00:07:48.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Namespace ID:2 00:07:48.055 Error Recovery Timeout: Unlimited 00:07:48.055 Command Set Identifier: NVM (00h) 00:07:48.055 Deallocate: Supported 00:07:48.055 Deallocated/Unwritten Error: Supported 00:07:48.055 Deallocated Read Value: All 0x00 00:07:48.055 Deallocate in Write Zeroes: Not Supported 00:07:48.055 Deallocated Guard Field: 0xFFFF 00:07:48.055 Flush: Supported 00:07:48.055 Reservation: Not Supported 00:07:48.055 Namespace Sharing Capabilities: Private 00:07:48.055 Size (in LBAs): 1048576 (4GiB) 00:07:48.055 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.055 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.055 Thin Provisioning: Not Supported 00:07:48.055 Per-NS Atomic Units: No 00:07:48.055 Maximum Single Source Range Length: 128 00:07:48.055 Maximum Copy Length: 128 00:07:48.055 Maximum Source Range Count: 128 00:07:48.055 NGUID/EUI64 Never Reused: No 00:07:48.055 Namespace Write Protected: No 00:07:48.055 Number of LBA Formats: 8 00:07:48.055 Current LBA Format: LBA Format #04 00:07:48.055 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.055 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.055 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.055 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.055 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.055 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.055 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.055 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.055 00:07:48.055 NVM Specific Namespace Data 00:07:48.055 =========================== 00:07:48.055 Logical Block Storage Tag Mask: 0 00:07:48.055 Protection Information Capabilities: 00:07:48.055 16b Guard Protection Information Storage Tag Support: No 00:07:48.055 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:07:48.055 Storage Tag Check Read Support: No 00:07:48.055 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.055 Namespace ID:3 00:07:48.055 Error Recovery Timeout: Unlimited 00:07:48.055 Command Set Identifier: NVM (00h) 00:07:48.055 Deallocate: Supported 00:07:48.055 Deallocated/Unwritten Error: Supported 00:07:48.055 Deallocated Read Value: All 0x00 00:07:48.055 Deallocate in Write Zeroes: Not Supported 00:07:48.055 Deallocated Guard Field: 0xFFFF 00:07:48.055 Flush: Supported 00:07:48.055 Reservation: Not Supported 00:07:48.055 Namespace Sharing Capabilities: Private 00:07:48.055 Size (in LBAs): 1048576 (4GiB) 00:07:48.318 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.318 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.318 Thin Provisioning: Not Supported 00:07:48.318 Per-NS Atomic Units: No 00:07:48.318 Maximum Single Source Range Length: 128 00:07:48.318 Maximum Copy Length: 128 00:07:48.318 Maximum Source Range Count: 128 00:07:48.319 NGUID/EUI64 Never Reused: No 00:07:48.319 Namespace Write Protected: No 00:07:48.319 Number of LBA Formats: 8 00:07:48.319 Current LBA Format: LBA Format #04 00:07:48.319 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.319 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.319 00:07:48.319 NVM Specific Namespace Data 00:07:48.319 =========================== 00:07:48.319 Logical Block Storage Tag Mask: 0 00:07:48.319 Protection Information Capabilities: 00:07:48.319 16b Guard Protection Information Storage Tag Support: No 00:07:48.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.319 Storage Tag Check Read Support: No 00:07:48.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.319 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.319 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:48.319 ===================================================== 00:07:48.319 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:48.319 ===================================================== 00:07:48.319 Controller Capabilities/Features 00:07:48.319 ================================ 00:07:48.319 Vendor ID: 1b36 00:07:48.319 Subsystem Vendor ID: 1af4 00:07:48.319 Serial Number: 12340 00:07:48.319 Model Number: QEMU NVMe Ctrl 00:07:48.319 Firmware Version: 8.0.0 00:07:48.319 Recommended Arb Burst: 6 00:07:48.319 IEEE OUI Identifier: 00 54 52 00:07:48.319 Multi-path I/O 00:07:48.319 May have multiple subsystem ports: No 00:07:48.319 May have multiple controllers: No 00:07:48.319 Associated with SR-IOV VF: No 00:07:48.319 Max Data Transfer Size: 524288 00:07:48.319 Max Number of Namespaces: 256 00:07:48.319 Max Number of I/O Queues: 64 00:07:48.319 NVMe Specification Version (VS): 1.4 00:07:48.319 NVMe Specification Version (Identify): 1.4 00:07:48.319 Maximum Queue Entries: 2048 00:07:48.319 Contiguous Queues Required: Yes 00:07:48.319 Arbitration Mechanisms Supported 00:07:48.319 Weighted Round Robin: Not Supported 00:07:48.319 Vendor Specific: Not Supported 00:07:48.319 Reset Timeout: 7500 ms 00:07:48.319 Doorbell Stride: 4 bytes 00:07:48.319 NVM Subsystem Reset: Not Supported 00:07:48.319 Command Sets Supported 00:07:48.319 NVM Command Set: Supported 00:07:48.319 Boot Partition: Not Supported 00:07:48.319 Memory Page Size Minimum: 4096 bytes 00:07:48.319 Memory Page Size Maximum: 65536 bytes 00:07:48.319 Persistent Memory Region: Not Supported 00:07:48.319 Optional Asynchronous Events Supported 00:07:48.319 Namespace Attribute Notices: Supported 00:07:48.319 Firmware Activation Notices: Not Supported 00:07:48.319 ANA Change Notices: Not Supported 00:07:48.319 PLE Aggregate Log Change Notices: Not Supported 00:07:48.319 LBA Status Info Alert Notices: Not Supported 00:07:48.319 EGE Aggregate Log Change Notices: Not Supported 00:07:48.319 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.319 Zone Descriptor Change Notices: Not Supported 00:07:48.319 Discovery Log Change Notices: Not Supported 00:07:48.319 Controller Attributes 00:07:48.319 128-bit Host Identifier: Not Supported 00:07:48.319 Non-Operational Permissive Mode: Not Supported 00:07:48.319 NVM Sets: Not Supported 00:07:48.319 Read Recovery Levels: Not Supported 00:07:48.319 Endurance Groups: Not Supported 00:07:48.319 Predictable Latency Mode: Not Supported 00:07:48.319 Traffic Based Keep ALive: Not Supported 00:07:48.319 Namespace Granularity: Not Supported 00:07:48.319 SQ Associations: Not Supported 00:07:48.319 UUID List: Not Supported 00:07:48.319 Multi-Domain Subsystem: Not Supported 00:07:48.319 Fixed Capacity Management: Not Supported 00:07:48.319 Variable Capacity Management: Not Supported 00:07:48.319 Delete Endurance Group: Not Supported 00:07:48.319 Delete NVM Set: Not Supported 00:07:48.319 Extended LBA Formats Supported: Supported 00:07:48.319 Flexible Data Placement Supported: Not Supported 00:07:48.319 00:07:48.319 Controller Memory Buffer Support 00:07:48.319 ================================ 00:07:48.319 Supported: No 00:07:48.319 00:07:48.319 Persistent Memory Region Support 00:07:48.319 
================================ 00:07:48.319 Supported: No 00:07:48.319 00:07:48.319 Admin Command Set Attributes 00:07:48.319 ============================ 00:07:48.319 Security Send/Receive: Not Supported 00:07:48.319 Format NVM: Supported 00:07:48.319 Firmware Activate/Download: Not Supported 00:07:48.319 Namespace Management: Supported 00:07:48.319 Device Self-Test: Not Supported 00:07:48.319 Directives: Supported 00:07:48.319 NVMe-MI: Not Supported 00:07:48.319 Virtualization Management: Not Supported 00:07:48.319 Doorbell Buffer Config: Supported 00:07:48.319 Get LBA Status Capability: Not Supported 00:07:48.319 Command & Feature Lockdown Capability: Not Supported 00:07:48.319 Abort Command Limit: 4 00:07:48.319 Async Event Request Limit: 4 00:07:48.319 Number of Firmware Slots: N/A 00:07:48.319 Firmware Slot 1 Read-Only: N/A 00:07:48.319 Firmware Activation Without Reset: N/A 00:07:48.319 Multiple Update Detection Support: N/A 00:07:48.319 Firmware Update Granularity: No Information Provided 00:07:48.319 Per-Namespace SMART Log: Yes 00:07:48.319 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.319 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:48.319 Command Effects Log Page: Supported 00:07:48.319 Get Log Page Extended Data: Supported 00:07:48.319 Telemetry Log Pages: Not Supported 00:07:48.319 Persistent Event Log Pages: Not Supported 00:07:48.319 Supported Log Pages Log Page: May Support 00:07:48.319 Commands Supported & Effects Log Page: Not Supported 00:07:48.319 Feature Identifiers & Effects Log Page:May Support 00:07:48.319 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.319 Data Area 4 for Telemetry Log: Not Supported 00:07:48.319 Error Log Page Entries Supported: 1 00:07:48.319 Keep Alive: Not Supported 00:07:48.319 00:07:48.319 NVM Command Set Attributes 00:07:48.319 ========================== 00:07:48.319 Submission Queue Entry Size 00:07:48.319 Max: 64 00:07:48.319 Min: 64 00:07:48.319 Completion Queue Entry Size 00:07:48.319 Max: 16 00:07:48.319 Min: 16 00:07:48.319 Number of Namespaces: 256 00:07:48.319 Compare Command: Supported 00:07:48.319 Write Uncorrectable Command: Not Supported 00:07:48.319 Dataset Management Command: Supported 00:07:48.319 Write Zeroes Command: Supported 00:07:48.319 Set Features Save Field: Supported 00:07:48.319 Reservations: Not Supported 00:07:48.319 Timestamp: Supported 00:07:48.319 Copy: Supported 00:07:48.319 Volatile Write Cache: Present 00:07:48.319 Atomic Write Unit (Normal): 1 00:07:48.319 Atomic Write Unit (PFail): 1 00:07:48.319 Atomic Compare & Write Unit: 1 00:07:48.319 Fused Compare & Write: Not Supported 00:07:48.319 Scatter-Gather List 00:07:48.319 SGL Command Set: Supported 00:07:48.319 SGL Keyed: Not Supported 00:07:48.319 SGL Bit Bucket Descriptor: Not Supported 00:07:48.319 SGL Metadata Pointer: Not Supported 00:07:48.319 Oversized SGL: Not Supported 00:07:48.319 SGL Metadata Address: Not Supported 00:07:48.319 SGL Offset: Not Supported 00:07:48.319 Transport SGL Data Block: Not Supported 00:07:48.319 Replay Protected Memory Block: Not Supported 00:07:48.319 00:07:48.319 Firmware Slot Information 00:07:48.319 ========================= 00:07:48.319 Active slot: 1 00:07:48.319 Slot 1 Firmware Revision: 1.0 00:07:48.319 00:07:48.319 00:07:48.319 Commands Supported and Effects 00:07:48.319 ============================== 00:07:48.319 Admin Commands 00:07:48.319 -------------- 00:07:48.319 Delete I/O Submission Queue (00h): Supported 00:07:48.319 Create I/O Submission Queue (01h): Supported 00:07:48.319 
Get Log Page (02h): Supported 00:07:48.319 Delete I/O Completion Queue (04h): Supported 00:07:48.319 Create I/O Completion Queue (05h): Supported 00:07:48.319 Identify (06h): Supported 00:07:48.319 Abort (08h): Supported 00:07:48.319 Set Features (09h): Supported 00:07:48.319 Get Features (0Ah): Supported 00:07:48.320 Asynchronous Event Request (0Ch): Supported 00:07:48.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.320 Directive Send (19h): Supported 00:07:48.320 Directive Receive (1Ah): Supported 00:07:48.320 Virtualization Management (1Ch): Supported 00:07:48.320 Doorbell Buffer Config (7Ch): Supported 00:07:48.320 Format NVM (80h): Supported LBA-Change 00:07:48.320 I/O Commands 00:07:48.320 ------------ 00:07:48.320 Flush (00h): Supported LBA-Change 00:07:48.320 Write (01h): Supported LBA-Change 00:07:48.320 Read (02h): Supported 00:07:48.320 Compare (05h): Supported 00:07:48.320 Write Zeroes (08h): Supported LBA-Change 00:07:48.320 Dataset Management (09h): Supported LBA-Change 00:07:48.320 Unknown (0Ch): Supported 00:07:48.320 Unknown (12h): Supported 00:07:48.320 Copy (19h): Supported LBA-Change 00:07:48.320 Unknown (1Dh): Supported LBA-Change 00:07:48.320 00:07:48.320 Error Log 00:07:48.320 ========= 00:07:48.320 00:07:48.320 Arbitration 00:07:48.320 =========== 00:07:48.320 Arbitration Burst: no limit 00:07:48.320 00:07:48.320 Power Management 00:07:48.320 ================ 00:07:48.320 Number of Power States: 1 00:07:48.320 Current Power State: Power State #0 00:07:48.320 Power State #0: 00:07:48.320 Max Power: 25.00 W 00:07:48.320 Non-Operational State: Operational 00:07:48.320 Entry Latency: 16 microseconds 00:07:48.320 Exit Latency: 4 microseconds 00:07:48.320 Relative Read Throughput: 0 00:07:48.320 Relative Read Latency: 0 00:07:48.320 Relative Write Throughput: 0 00:07:48.320 Relative Write Latency: 0 00:07:48.320 Idle Power: Not Reported 00:07:48.320 Active Power: Not Reported 00:07:48.320 Non-Operational Permissive Mode: Not Supported 00:07:48.320 00:07:48.320 Health Information 00:07:48.320 ================== 00:07:48.320 Critical Warnings: 00:07:48.320 Available Spare Space: OK 00:07:48.320 Temperature: OK 00:07:48.320 Device Reliability: OK 00:07:48.320 Read Only: No 00:07:48.320 Volatile Memory Backup: OK 00:07:48.320 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.320 Available Spare: 0% 00:07:48.320 Available Spare Threshold: 0% 00:07:48.320 Life Percentage Used: 0% 00:07:48.320 Data Units Read: 670 00:07:48.320 Data Units Written: 598 00:07:48.320 Host Read Commands: 36456 00:07:48.320 Host Write Commands: 36242 00:07:48.320 Controller Busy Time: 0 minutes 00:07:48.320 Power Cycles: 0 00:07:48.320 Power On Hours: 0 hours 00:07:48.320 Unsafe Shutdowns: 0 00:07:48.320 Unrecoverable Media Errors: 0 00:07:48.320 Lifetime Error Log Entries: 0 00:07:48.320 Warning Temperature Time: 0 minutes 00:07:48.320 Critical Temperature Time: 0 minutes 00:07:48.320 00:07:48.320 Number of Queues 00:07:48.320 ================ 00:07:48.320 Number of I/O Submission Queues: 64 00:07:48.320 Number of I/O Completion Queues: 64 00:07:48.320 00:07:48.320 ZNS Specific Controller Data 00:07:48.320 ============================ 00:07:48.320 Zone Append Size Limit: 0 00:07:48.320 00:07:48.320 00:07:48.320 Active Namespaces 00:07:48.320 ================= 00:07:48.320 Namespace ID:1 00:07:48.320 Error Recovery Timeout: Unlimited 00:07:48.320 Command Set Identifier: NVM (00h) 00:07:48.320 Deallocate: Supported 
00:07:48.320 Deallocated/Unwritten Error: Supported 00:07:48.320 Deallocated Read Value: All 0x00 00:07:48.320 Deallocate in Write Zeroes: Not Supported 00:07:48.320 Deallocated Guard Field: 0xFFFF 00:07:48.320 Flush: Supported 00:07:48.320 Reservation: Not Supported 00:07:48.320 Metadata Transferred as: Separate Metadata Buffer 00:07:48.320 Namespace Sharing Capabilities: Private 00:07:48.320 Size (in LBAs): 1548666 (5GiB) 00:07:48.320 Capacity (in LBAs): 1548666 (5GiB) 00:07:48.320 Utilization (in LBAs): 1548666 (5GiB) 00:07:48.320 Thin Provisioning: Not Supported 00:07:48.320 Per-NS Atomic Units: No 00:07:48.320 Maximum Single Source Range Length: 128 00:07:48.320 Maximum Copy Length: 128 00:07:48.320 Maximum Source Range Count: 128 00:07:48.320 NGUID/EUI64 Never Reused: No 00:07:48.320 Namespace Write Protected: No 00:07:48.320 Number of LBA Formats: 8 00:07:48.320 Current LBA Format: LBA Format #07 00:07:48.320 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.320 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.320 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.320 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.320 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.320 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.320 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.320 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.320 00:07:48.320 NVM Specific Namespace Data 00:07:48.320 =========================== 00:07:48.320 Logical Block Storage Tag Mask: 0 00:07:48.320 Protection Information Capabilities: 00:07:48.320 16b Guard Protection Information Storage Tag Support: No 00:07:48.320 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.320 Storage Tag Check Read Support: No 00:07:48.320 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.320 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.582 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.582 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:48.582 ===================================================== 00:07:48.582 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:48.582 ===================================================== 00:07:48.582 Controller Capabilities/Features 00:07:48.582 ================================ 00:07:48.582 Vendor ID: 1b36 00:07:48.582 Subsystem Vendor ID: 1af4 00:07:48.582 Serial Number: 12341 00:07:48.582 Model Number: QEMU NVMe Ctrl 00:07:48.582 Firmware Version: 8.0.0 00:07:48.582 Recommended Arb Burst: 6 00:07:48.582 IEEE OUI Identifier: 00 54 52 00:07:48.582 Multi-path I/O 00:07:48.582 May have multiple subsystem ports: No 00:07:48.582 May have multiple 
controllers: No 00:07:48.582 Associated with SR-IOV VF: No 00:07:48.582 Max Data Transfer Size: 524288 00:07:48.582 Max Number of Namespaces: 256 00:07:48.582 Max Number of I/O Queues: 64 00:07:48.582 NVMe Specification Version (VS): 1.4 00:07:48.582 NVMe Specification Version (Identify): 1.4 00:07:48.582 Maximum Queue Entries: 2048 00:07:48.582 Contiguous Queues Required: Yes 00:07:48.582 Arbitration Mechanisms Supported 00:07:48.582 Weighted Round Robin: Not Supported 00:07:48.582 Vendor Specific: Not Supported 00:07:48.582 Reset Timeout: 7500 ms 00:07:48.582 Doorbell Stride: 4 bytes 00:07:48.582 NVM Subsystem Reset: Not Supported 00:07:48.582 Command Sets Supported 00:07:48.582 NVM Command Set: Supported 00:07:48.582 Boot Partition: Not Supported 00:07:48.582 Memory Page Size Minimum: 4096 bytes 00:07:48.582 Memory Page Size Maximum: 65536 bytes 00:07:48.583 Persistent Memory Region: Not Supported 00:07:48.583 Optional Asynchronous Events Supported 00:07:48.583 Namespace Attribute Notices: Supported 00:07:48.583 Firmware Activation Notices: Not Supported 00:07:48.583 ANA Change Notices: Not Supported 00:07:48.583 PLE Aggregate Log Change Notices: Not Supported 00:07:48.583 LBA Status Info Alert Notices: Not Supported 00:07:48.583 EGE Aggregate Log Change Notices: Not Supported 00:07:48.583 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.583 Zone Descriptor Change Notices: Not Supported 00:07:48.583 Discovery Log Change Notices: Not Supported 00:07:48.583 Controller Attributes 00:07:48.583 128-bit Host Identifier: Not Supported 00:07:48.583 Non-Operational Permissive Mode: Not Supported 00:07:48.583 NVM Sets: Not Supported 00:07:48.583 Read Recovery Levels: Not Supported 00:07:48.583 Endurance Groups: Not Supported 00:07:48.583 Predictable Latency Mode: Not Supported 00:07:48.583 Traffic Based Keep ALive: Not Supported 00:07:48.583 Namespace Granularity: Not Supported 00:07:48.583 SQ Associations: Not Supported 00:07:48.583 UUID List: Not Supported 00:07:48.583 Multi-Domain Subsystem: Not Supported 00:07:48.583 Fixed Capacity Management: Not Supported 00:07:48.583 Variable Capacity Management: Not Supported 00:07:48.583 Delete Endurance Group: Not Supported 00:07:48.583 Delete NVM Set: Not Supported 00:07:48.583 Extended LBA Formats Supported: Supported 00:07:48.583 Flexible Data Placement Supported: Not Supported 00:07:48.583 00:07:48.583 Controller Memory Buffer Support 00:07:48.583 ================================ 00:07:48.583 Supported: No 00:07:48.583 00:07:48.583 Persistent Memory Region Support 00:07:48.583 ================================ 00:07:48.583 Supported: No 00:07:48.583 00:07:48.583 Admin Command Set Attributes 00:07:48.583 ============================ 00:07:48.583 Security Send/Receive: Not Supported 00:07:48.583 Format NVM: Supported 00:07:48.583 Firmware Activate/Download: Not Supported 00:07:48.583 Namespace Management: Supported 00:07:48.583 Device Self-Test: Not Supported 00:07:48.583 Directives: Supported 00:07:48.583 NVMe-MI: Not Supported 00:07:48.583 Virtualization Management: Not Supported 00:07:48.583 Doorbell Buffer Config: Supported 00:07:48.583 Get LBA Status Capability: Not Supported 00:07:48.583 Command & Feature Lockdown Capability: Not Supported 00:07:48.583 Abort Command Limit: 4 00:07:48.583 Async Event Request Limit: 4 00:07:48.583 Number of Firmware Slots: N/A 00:07:48.583 Firmware Slot 1 Read-Only: N/A 00:07:48.583 Firmware Activation Without Reset: N/A 00:07:48.583 Multiple Update Detection Support: N/A 00:07:48.583 Firmware Update 
Granularity: No Information Provided 00:07:48.583 Per-Namespace SMART Log: Yes 00:07:48.583 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.583 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:48.583 Command Effects Log Page: Supported 00:07:48.583 Get Log Page Extended Data: Supported 00:07:48.583 Telemetry Log Pages: Not Supported 00:07:48.583 Persistent Event Log Pages: Not Supported 00:07:48.583 Supported Log Pages Log Page: May Support 00:07:48.583 Commands Supported & Effects Log Page: Not Supported 00:07:48.583 Feature Identifiers & Effects Log Page:May Support 00:07:48.583 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.583 Data Area 4 for Telemetry Log: Not Supported 00:07:48.583 Error Log Page Entries Supported: 1 00:07:48.583 Keep Alive: Not Supported 00:07:48.583 00:07:48.583 NVM Command Set Attributes 00:07:48.583 ========================== 00:07:48.583 Submission Queue Entry Size 00:07:48.583 Max: 64 00:07:48.583 Min: 64 00:07:48.583 Completion Queue Entry Size 00:07:48.583 Max: 16 00:07:48.583 Min: 16 00:07:48.583 Number of Namespaces: 256 00:07:48.583 Compare Command: Supported 00:07:48.583 Write Uncorrectable Command: Not Supported 00:07:48.583 Dataset Management Command: Supported 00:07:48.583 Write Zeroes Command: Supported 00:07:48.583 Set Features Save Field: Supported 00:07:48.583 Reservations: Not Supported 00:07:48.583 Timestamp: Supported 00:07:48.583 Copy: Supported 00:07:48.583 Volatile Write Cache: Present 00:07:48.583 Atomic Write Unit (Normal): 1 00:07:48.583 Atomic Write Unit (PFail): 1 00:07:48.583 Atomic Compare & Write Unit: 1 00:07:48.583 Fused Compare & Write: Not Supported 00:07:48.583 Scatter-Gather List 00:07:48.583 SGL Command Set: Supported 00:07:48.583 SGL Keyed: Not Supported 00:07:48.583 SGL Bit Bucket Descriptor: Not Supported 00:07:48.583 SGL Metadata Pointer: Not Supported 00:07:48.583 Oversized SGL: Not Supported 00:07:48.583 SGL Metadata Address: Not Supported 00:07:48.583 SGL Offset: Not Supported 00:07:48.583 Transport SGL Data Block: Not Supported 00:07:48.583 Replay Protected Memory Block: Not Supported 00:07:48.583 00:07:48.583 Firmware Slot Information 00:07:48.583 ========================= 00:07:48.583 Active slot: 1 00:07:48.583 Slot 1 Firmware Revision: 1.0 00:07:48.583 00:07:48.583 00:07:48.583 Commands Supported and Effects 00:07:48.583 ============================== 00:07:48.583 Admin Commands 00:07:48.583 -------------- 00:07:48.583 Delete I/O Submission Queue (00h): Supported 00:07:48.583 Create I/O Submission Queue (01h): Supported 00:07:48.583 Get Log Page (02h): Supported 00:07:48.583 Delete I/O Completion Queue (04h): Supported 00:07:48.583 Create I/O Completion Queue (05h): Supported 00:07:48.583 Identify (06h): Supported 00:07:48.583 Abort (08h): Supported 00:07:48.583 Set Features (09h): Supported 00:07:48.583 Get Features (0Ah): Supported 00:07:48.583 Asynchronous Event Request (0Ch): Supported 00:07:48.583 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.583 Directive Send (19h): Supported 00:07:48.583 Directive Receive (1Ah): Supported 00:07:48.583 Virtualization Management (1Ch): Supported 00:07:48.583 Doorbell Buffer Config (7Ch): Supported 00:07:48.583 Format NVM (80h): Supported LBA-Change 00:07:48.583 I/O Commands 00:07:48.583 ------------ 00:07:48.583 Flush (00h): Supported LBA-Change 00:07:48.583 Write (01h): Supported LBA-Change 00:07:48.583 Read (02h): Supported 00:07:48.583 Compare (05h): Supported 00:07:48.583 Write Zeroes (08h): Supported LBA-Change 00:07:48.583 
Dataset Management (09h): Supported LBA-Change 00:07:48.583 Unknown (0Ch): Supported 00:07:48.583 Unknown (12h): Supported 00:07:48.583 Copy (19h): Supported LBA-Change 00:07:48.583 Unknown (1Dh): Supported LBA-Change 00:07:48.583 00:07:48.583 Error Log 00:07:48.583 ========= 00:07:48.583 00:07:48.583 Arbitration 00:07:48.583 =========== 00:07:48.583 Arbitration Burst: no limit 00:07:48.583 00:07:48.583 Power Management 00:07:48.583 ================ 00:07:48.583 Number of Power States: 1 00:07:48.583 Current Power State: Power State #0 00:07:48.583 Power State #0: 00:07:48.583 Max Power: 25.00 W 00:07:48.583 Non-Operational State: Operational 00:07:48.583 Entry Latency: 16 microseconds 00:07:48.583 Exit Latency: 4 microseconds 00:07:48.583 Relative Read Throughput: 0 00:07:48.583 Relative Read Latency: 0 00:07:48.583 Relative Write Throughput: 0 00:07:48.583 Relative Write Latency: 0 00:07:48.583 Idle Power: Not Reported 00:07:48.583 Active Power: Not Reported 00:07:48.583 Non-Operational Permissive Mode: Not Supported 00:07:48.583 00:07:48.583 Health Information 00:07:48.583 ================== 00:07:48.583 Critical Warnings: 00:07:48.583 Available Spare Space: OK 00:07:48.583 Temperature: OK 00:07:48.583 Device Reliability: OK 00:07:48.583 Read Only: No 00:07:48.583 Volatile Memory Backup: OK 00:07:48.583 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.583 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.583 Available Spare: 0% 00:07:48.583 Available Spare Threshold: 0% 00:07:48.583 Life Percentage Used: 0% 00:07:48.583 Data Units Read: 1062 00:07:48.583 Data Units Written: 928 00:07:48.583 Host Read Commands: 55149 00:07:48.583 Host Write Commands: 53941 00:07:48.583 Controller Busy Time: 0 minutes 00:07:48.583 Power Cycles: 0 00:07:48.583 Power On Hours: 0 hours 00:07:48.583 Unsafe Shutdowns: 0 00:07:48.583 Unrecoverable Media Errors: 0 00:07:48.583 Lifetime Error Log Entries: 0 00:07:48.583 Warning Temperature Time: 0 minutes 00:07:48.583 Critical Temperature Time: 0 minutes 00:07:48.583 00:07:48.583 Number of Queues 00:07:48.583 ================ 00:07:48.583 Number of I/O Submission Queues: 64 00:07:48.583 Number of I/O Completion Queues: 64 00:07:48.583 00:07:48.583 ZNS Specific Controller Data 00:07:48.583 ============================ 00:07:48.583 Zone Append Size Limit: 0 00:07:48.583 00:07:48.583 00:07:48.583 Active Namespaces 00:07:48.583 ================= 00:07:48.583 Namespace ID:1 00:07:48.584 Error Recovery Timeout: Unlimited 00:07:48.584 Command Set Identifier: NVM (00h) 00:07:48.584 Deallocate: Supported 00:07:48.584 Deallocated/Unwritten Error: Supported 00:07:48.584 Deallocated Read Value: All 0x00 00:07:48.584 Deallocate in Write Zeroes: Not Supported 00:07:48.584 Deallocated Guard Field: 0xFFFF 00:07:48.584 Flush: Supported 00:07:48.584 Reservation: Not Supported 00:07:48.584 Namespace Sharing Capabilities: Private 00:07:48.584 Size (in LBAs): 1310720 (5GiB) 00:07:48.584 Capacity (in LBAs): 1310720 (5GiB) 00:07:48.584 Utilization (in LBAs): 1310720 (5GiB) 00:07:48.584 Thin Provisioning: Not Supported 00:07:48.584 Per-NS Atomic Units: No 00:07:48.584 Maximum Single Source Range Length: 128 00:07:48.584 Maximum Copy Length: 128 00:07:48.584 Maximum Source Range Count: 128 00:07:48.584 NGUID/EUI64 Never Reused: No 00:07:48.584 Namespace Write Protected: No 00:07:48.584 Number of LBA Formats: 8 00:07:48.584 Current LBA Format: LBA Format #04 00:07:48.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.584 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:07:48.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.584 00:07:48.584 NVM Specific Namespace Data 00:07:48.584 =========================== 00:07:48.584 Logical Block Storage Tag Mask: 0 00:07:48.584 Protection Information Capabilities: 00:07:48.584 16b Guard Protection Information Storage Tag Support: No 00:07:48.584 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.584 Storage Tag Check Read Support: No 00:07:48.584 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.584 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.584 16:54:22 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:48.846 ===================================================== 00:07:48.846 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:48.846 ===================================================== 00:07:48.846 Controller Capabilities/Features 00:07:48.846 ================================ 00:07:48.846 Vendor ID: 1b36 00:07:48.846 Subsystem Vendor ID: 1af4 00:07:48.846 Serial Number: 12342 00:07:48.846 Model Number: QEMU NVMe Ctrl 00:07:48.846 Firmware Version: 8.0.0 00:07:48.846 Recommended Arb Burst: 6 00:07:48.846 IEEE OUI Identifier: 00 54 52 00:07:48.846 Multi-path I/O 00:07:48.846 May have multiple subsystem ports: No 00:07:48.846 May have multiple controllers: No 00:07:48.846 Associated with SR-IOV VF: No 00:07:48.846 Max Data Transfer Size: 524288 00:07:48.846 Max Number of Namespaces: 256 00:07:48.846 Max Number of I/O Queues: 64 00:07:48.846 NVMe Specification Version (VS): 1.4 00:07:48.846 NVMe Specification Version (Identify): 1.4 00:07:48.846 Maximum Queue Entries: 2048 00:07:48.846 Contiguous Queues Required: Yes 00:07:48.846 Arbitration Mechanisms Supported 00:07:48.846 Weighted Round Robin: Not Supported 00:07:48.846 Vendor Specific: Not Supported 00:07:48.846 Reset Timeout: 7500 ms 00:07:48.846 Doorbell Stride: 4 bytes 00:07:48.846 NVM Subsystem Reset: Not Supported 00:07:48.846 Command Sets Supported 00:07:48.846 NVM Command Set: Supported 00:07:48.846 Boot Partition: Not Supported 00:07:48.846 Memory Page Size Minimum: 4096 bytes 00:07:48.846 Memory Page Size Maximum: 65536 bytes 00:07:48.846 Persistent Memory Region: Not Supported 00:07:48.846 Optional Asynchronous Events Supported 00:07:48.846 Namespace Attribute Notices: Supported 00:07:48.846 Firmware 
Activation Notices: Not Supported 00:07:48.846 ANA Change Notices: Not Supported 00:07:48.846 PLE Aggregate Log Change Notices: Not Supported 00:07:48.846 LBA Status Info Alert Notices: Not Supported 00:07:48.846 EGE Aggregate Log Change Notices: Not Supported 00:07:48.846 Normal NVM Subsystem Shutdown event: Not Supported 00:07:48.846 Zone Descriptor Change Notices: Not Supported 00:07:48.846 Discovery Log Change Notices: Not Supported 00:07:48.846 Controller Attributes 00:07:48.846 128-bit Host Identifier: Not Supported 00:07:48.846 Non-Operational Permissive Mode: Not Supported 00:07:48.846 NVM Sets: Not Supported 00:07:48.846 Read Recovery Levels: Not Supported 00:07:48.846 Endurance Groups: Not Supported 00:07:48.846 Predictable Latency Mode: Not Supported 00:07:48.846 Traffic Based Keep ALive: Not Supported 00:07:48.846 Namespace Granularity: Not Supported 00:07:48.846 SQ Associations: Not Supported 00:07:48.846 UUID List: Not Supported 00:07:48.846 Multi-Domain Subsystem: Not Supported 00:07:48.846 Fixed Capacity Management: Not Supported 00:07:48.846 Variable Capacity Management: Not Supported 00:07:48.846 Delete Endurance Group: Not Supported 00:07:48.846 Delete NVM Set: Not Supported 00:07:48.846 Extended LBA Formats Supported: Supported 00:07:48.846 Flexible Data Placement Supported: Not Supported 00:07:48.846 00:07:48.847 Controller Memory Buffer Support 00:07:48.847 ================================ 00:07:48.847 Supported: No 00:07:48.847 00:07:48.847 Persistent Memory Region Support 00:07:48.847 ================================ 00:07:48.847 Supported: No 00:07:48.847 00:07:48.847 Admin Command Set Attributes 00:07:48.847 ============================ 00:07:48.847 Security Send/Receive: Not Supported 00:07:48.847 Format NVM: Supported 00:07:48.847 Firmware Activate/Download: Not Supported 00:07:48.847 Namespace Management: Supported 00:07:48.847 Device Self-Test: Not Supported 00:07:48.847 Directives: Supported 00:07:48.847 NVMe-MI: Not Supported 00:07:48.847 Virtualization Management: Not Supported 00:07:48.847 Doorbell Buffer Config: Supported 00:07:48.847 Get LBA Status Capability: Not Supported 00:07:48.847 Command & Feature Lockdown Capability: Not Supported 00:07:48.847 Abort Command Limit: 4 00:07:48.847 Async Event Request Limit: 4 00:07:48.847 Number of Firmware Slots: N/A 00:07:48.847 Firmware Slot 1 Read-Only: N/A 00:07:48.847 Firmware Activation Without Reset: N/A 00:07:48.847 Multiple Update Detection Support: N/A 00:07:48.847 Firmware Update Granularity: No Information Provided 00:07:48.847 Per-Namespace SMART Log: Yes 00:07:48.847 Asymmetric Namespace Access Log Page: Not Supported 00:07:48.847 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:48.847 Command Effects Log Page: Supported 00:07:48.847 Get Log Page Extended Data: Supported 00:07:48.847 Telemetry Log Pages: Not Supported 00:07:48.847 Persistent Event Log Pages: Not Supported 00:07:48.847 Supported Log Pages Log Page: May Support 00:07:48.847 Commands Supported & Effects Log Page: Not Supported 00:07:48.847 Feature Identifiers & Effects Log Page:May Support 00:07:48.847 NVMe-MI Commands & Effects Log Page: May Support 00:07:48.847 Data Area 4 for Telemetry Log: Not Supported 00:07:48.847 Error Log Page Entries Supported: 1 00:07:48.847 Keep Alive: Not Supported 00:07:48.847 00:07:48.847 NVM Command Set Attributes 00:07:48.847 ========================== 00:07:48.847 Submission Queue Entry Size 00:07:48.847 Max: 64 00:07:48.847 Min: 64 00:07:48.847 Completion Queue Entry Size 00:07:48.847 Max: 16 
00:07:48.847 Min: 16 00:07:48.847 Number of Namespaces: 256 00:07:48.847 Compare Command: Supported 00:07:48.847 Write Uncorrectable Command: Not Supported 00:07:48.847 Dataset Management Command: Supported 00:07:48.847 Write Zeroes Command: Supported 00:07:48.847 Set Features Save Field: Supported 00:07:48.847 Reservations: Not Supported 00:07:48.847 Timestamp: Supported 00:07:48.847 Copy: Supported 00:07:48.847 Volatile Write Cache: Present 00:07:48.847 Atomic Write Unit (Normal): 1 00:07:48.847 Atomic Write Unit (PFail): 1 00:07:48.847 Atomic Compare & Write Unit: 1 00:07:48.847 Fused Compare & Write: Not Supported 00:07:48.847 Scatter-Gather List 00:07:48.847 SGL Command Set: Supported 00:07:48.847 SGL Keyed: Not Supported 00:07:48.847 SGL Bit Bucket Descriptor: Not Supported 00:07:48.847 SGL Metadata Pointer: Not Supported 00:07:48.847 Oversized SGL: Not Supported 00:07:48.847 SGL Metadata Address: Not Supported 00:07:48.847 SGL Offset: Not Supported 00:07:48.847 Transport SGL Data Block: Not Supported 00:07:48.847 Replay Protected Memory Block: Not Supported 00:07:48.847 00:07:48.847 Firmware Slot Information 00:07:48.847 ========================= 00:07:48.847 Active slot: 1 00:07:48.847 Slot 1 Firmware Revision: 1.0 00:07:48.847 00:07:48.847 00:07:48.847 Commands Supported and Effects 00:07:48.847 ============================== 00:07:48.847 Admin Commands 00:07:48.847 -------------- 00:07:48.847 Delete I/O Submission Queue (00h): Supported 00:07:48.847 Create I/O Submission Queue (01h): Supported 00:07:48.847 Get Log Page (02h): Supported 00:07:48.847 Delete I/O Completion Queue (04h): Supported 00:07:48.847 Create I/O Completion Queue (05h): Supported 00:07:48.847 Identify (06h): Supported 00:07:48.847 Abort (08h): Supported 00:07:48.847 Set Features (09h): Supported 00:07:48.847 Get Features (0Ah): Supported 00:07:48.847 Asynchronous Event Request (0Ch): Supported 00:07:48.847 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:48.847 Directive Send (19h): Supported 00:07:48.847 Directive Receive (1Ah): Supported 00:07:48.847 Virtualization Management (1Ch): Supported 00:07:48.847 Doorbell Buffer Config (7Ch): Supported 00:07:48.847 Format NVM (80h): Supported LBA-Change 00:07:48.847 I/O Commands 00:07:48.847 ------------ 00:07:48.847 Flush (00h): Supported LBA-Change 00:07:48.847 Write (01h): Supported LBA-Change 00:07:48.847 Read (02h): Supported 00:07:48.847 Compare (05h): Supported 00:07:48.847 Write Zeroes (08h): Supported LBA-Change 00:07:48.847 Dataset Management (09h): Supported LBA-Change 00:07:48.847 Unknown (0Ch): Supported 00:07:48.847 Unknown (12h): Supported 00:07:48.847 Copy (19h): Supported LBA-Change 00:07:48.847 Unknown (1Dh): Supported LBA-Change 00:07:48.847 00:07:48.847 Error Log 00:07:48.847 ========= 00:07:48.847 00:07:48.847 Arbitration 00:07:48.847 =========== 00:07:48.847 Arbitration Burst: no limit 00:07:48.847 00:07:48.847 Power Management 00:07:48.847 ================ 00:07:48.847 Number of Power States: 1 00:07:48.847 Current Power State: Power State #0 00:07:48.847 Power State #0: 00:07:48.847 Max Power: 25.00 W 00:07:48.847 Non-Operational State: Operational 00:07:48.847 Entry Latency: 16 microseconds 00:07:48.847 Exit Latency: 4 microseconds 00:07:48.847 Relative Read Throughput: 0 00:07:48.847 Relative Read Latency: 0 00:07:48.847 Relative Write Throughput: 0 00:07:48.847 Relative Write Latency: 0 00:07:48.847 Idle Power: Not Reported 00:07:48.847 Active Power: Not Reported 00:07:48.847 Non-Operational Permissive Mode: Not Supported 
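The namespace entries later in this dump (and throughout this stage) annotate Size, Capacity, and Utilization in LBAs with a GiB figure; under the current LBA format #04 (4096-byte data size, no metadata) that annotation is simply LBAs x 4096 bytes. A minimal bash sketch (illustrative, not part of the captured output) checking the two LBA counts that appear in this run; both values come straight from the dumps.

    # Verify the GiB annotations: bytes = LBAs * 4096 (LBA format #04),
    # then divide by 1024^3. 1048576 -> 4 GiB, 1310720 -> 5 GiB.
    for lbas in 1048576 1310720; do
      echo "$lbas LBAs = $(( lbas * 4096 / 1024**3 )) GiB"
    done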
00:07:48.847 00:07:48.847 Health Information 00:07:48.847 ================== 00:07:48.847 Critical Warnings: 00:07:48.847 Available Spare Space: OK 00:07:48.847 Temperature: OK 00:07:48.847 Device Reliability: OK 00:07:48.847 Read Only: No 00:07:48.847 Volatile Memory Backup: OK 00:07:48.847 Current Temperature: 323 Kelvin (50 Celsius) 00:07:48.847 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:48.847 Available Spare: 0% 00:07:48.847 Available Spare Threshold: 0% 00:07:48.847 Life Percentage Used: 0% 00:07:48.847 Data Units Read: 2204 00:07:48.847 Data Units Written: 1991 00:07:48.847 Host Read Commands: 111776 00:07:48.847 Host Write Commands: 110045 00:07:48.847 Controller Busy Time: 0 minutes 00:07:48.847 Power Cycles: 0 00:07:48.847 Power On Hours: 0 hours 00:07:48.847 Unsafe Shutdowns: 0 00:07:48.847 Unrecoverable Media Errors: 0 00:07:48.847 Lifetime Error Log Entries: 0 00:07:48.847 Warning Temperature Time: 0 minutes 00:07:48.847 Critical Temperature Time: 0 minutes 00:07:48.847 00:07:48.847 Number of Queues 00:07:48.847 ================ 00:07:48.847 Number of I/O Submission Queues: 64 00:07:48.847 Number of I/O Completion Queues: 64 00:07:48.847 00:07:48.847 ZNS Specific Controller Data 00:07:48.847 ============================ 00:07:48.847 Zone Append Size Limit: 0 00:07:48.847 00:07:48.847 00:07:48.847 Active Namespaces 00:07:48.847 ================= 00:07:48.847 Namespace ID:1 00:07:48.847 Error Recovery Timeout: Unlimited 00:07:48.847 Command Set Identifier: NVM (00h) 00:07:48.847 Deallocate: Supported 00:07:48.847 Deallocated/Unwritten Error: Supported 00:07:48.847 Deallocated Read Value: All 0x00 00:07:48.847 Deallocate in Write Zeroes: Not Supported 00:07:48.847 Deallocated Guard Field: 0xFFFF 00:07:48.847 Flush: Supported 00:07:48.847 Reservation: Not Supported 00:07:48.847 Namespace Sharing Capabilities: Private 00:07:48.847 Size (in LBAs): 1048576 (4GiB) 00:07:48.847 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.847 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.847 Thin Provisioning: Not Supported 00:07:48.847 Per-NS Atomic Units: No 00:07:48.847 Maximum Single Source Range Length: 128 00:07:48.847 Maximum Copy Length: 128 00:07:48.847 Maximum Source Range Count: 128 00:07:48.847 NGUID/EUI64 Never Reused: No 00:07:48.847 Namespace Write Protected: No 00:07:48.847 Number of LBA Formats: 8 00:07:48.847 Current LBA Format: LBA Format #04 00:07:48.847 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.847 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.847 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.847 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.847 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.847 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.847 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.847 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.847 00:07:48.847 NVM Specific Namespace Data 00:07:48.847 =========================== 00:07:48.847 Logical Block Storage Tag Mask: 0 00:07:48.848 Protection Information Capabilities: 00:07:48.848 16b Guard Protection Information Storage Tag Support: No 00:07:48.848 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.848 Storage Tag Check Read Support: No 00:07:48.848 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Namespace ID:2 00:07:48.848 Error Recovery Timeout: Unlimited 00:07:48.848 Command Set Identifier: NVM (00h) 00:07:48.848 Deallocate: Supported 00:07:48.848 Deallocated/Unwritten Error: Supported 00:07:48.848 Deallocated Read Value: All 0x00 00:07:48.848 Deallocate in Write Zeroes: Not Supported 00:07:48.848 Deallocated Guard Field: 0xFFFF 00:07:48.848 Flush: Supported 00:07:48.848 Reservation: Not Supported 00:07:48.848 Namespace Sharing Capabilities: Private 00:07:48.848 Size (in LBAs): 1048576 (4GiB) 00:07:48.848 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.848 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.848 Thin Provisioning: Not Supported 00:07:48.848 Per-NS Atomic Units: No 00:07:48.848 Maximum Single Source Range Length: 128 00:07:48.848 Maximum Copy Length: 128 00:07:48.848 Maximum Source Range Count: 128 00:07:48.848 NGUID/EUI64 Never Reused: No 00:07:48.848 Namespace Write Protected: No 00:07:48.848 Number of LBA Formats: 8 00:07:48.848 Current LBA Format: LBA Format #04 00:07:48.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.848 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.848 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.848 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.848 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.848 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.848 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.848 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.848 00:07:48.848 NVM Specific Namespace Data 00:07:48.848 =========================== 00:07:48.848 Logical Block Storage Tag Mask: 0 00:07:48.848 Protection Information Capabilities: 00:07:48.848 16b Guard Protection Information Storage Tag Support: No 00:07:48.848 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.848 Storage Tag Check Read Support: No 00:07:48.848 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Namespace ID:3 00:07:48.848 Error Recovery Timeout: Unlimited 00:07:48.848 Command Set Identifier: NVM (00h) 00:07:48.848 Deallocate: Supported 00:07:48.848 Deallocated/Unwritten Error: Supported 00:07:48.848 Deallocated Read 
Value: All 0x00 00:07:48.848 Deallocate in Write Zeroes: Not Supported 00:07:48.848 Deallocated Guard Field: 0xFFFF 00:07:48.848 Flush: Supported 00:07:48.848 Reservation: Not Supported 00:07:48.848 Namespace Sharing Capabilities: Private 00:07:48.848 Size (in LBAs): 1048576 (4GiB) 00:07:48.848 Capacity (in LBAs): 1048576 (4GiB) 00:07:48.848 Utilization (in LBAs): 1048576 (4GiB) 00:07:48.848 Thin Provisioning: Not Supported 00:07:48.848 Per-NS Atomic Units: No 00:07:48.848 Maximum Single Source Range Length: 128 00:07:48.848 Maximum Copy Length: 128 00:07:48.848 Maximum Source Range Count: 128 00:07:48.848 NGUID/EUI64 Never Reused: No 00:07:48.848 Namespace Write Protected: No 00:07:48.848 Number of LBA Formats: 8 00:07:48.848 Current LBA Format: LBA Format #04 00:07:48.848 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:48.848 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:48.848 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:48.848 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:48.848 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:48.848 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:48.848 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:48.848 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:48.848 00:07:48.848 NVM Specific Namespace Data 00:07:48.848 =========================== 00:07:48.848 Logical Block Storage Tag Mask: 0 00:07:48.848 Protection Information Capabilities: 00:07:48.848 16b Guard Protection Information Storage Tag Support: No 00:07:48.848 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:48.848 Storage Tag Check Read Support: No 00:07:48.848 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:48.848 16:54:23 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:48.848 16:54:23 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:49.111 ===================================================== 00:07:49.111 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:49.111 ===================================================== 00:07:49.111 Controller Capabilities/Features 00:07:49.111 ================================ 00:07:49.111 Vendor ID: 1b36 00:07:49.111 Subsystem Vendor ID: 1af4 00:07:49.111 Serial Number: 12343 00:07:49.111 Model Number: QEMU NVMe Ctrl 00:07:49.111 Firmware Version: 8.0.0 00:07:49.111 Recommended Arb Burst: 6 00:07:49.111 IEEE OUI Identifier: 00 54 52 00:07:49.111 Multi-path I/O 00:07:49.111 May have multiple subsystem ports: No 00:07:49.111 May have multiple controllers: Yes 00:07:49.111 Associated with SR-IOV VF: No 00:07:49.111 Max Data Transfer Size: 524288 00:07:49.111 Max Number of Namespaces: 
256 00:07:49.111 Max Number of I/O Queues: 64 00:07:49.111 NVMe Specification Version (VS): 1.4 00:07:49.111 NVMe Specification Version (Identify): 1.4 00:07:49.111 Maximum Queue Entries: 2048 00:07:49.111 Contiguous Queues Required: Yes 00:07:49.111 Arbitration Mechanisms Supported 00:07:49.111 Weighted Round Robin: Not Supported 00:07:49.111 Vendor Specific: Not Supported 00:07:49.111 Reset Timeout: 7500 ms 00:07:49.111 Doorbell Stride: 4 bytes 00:07:49.111 NVM Subsystem Reset: Not Supported 00:07:49.111 Command Sets Supported 00:07:49.111 NVM Command Set: Supported 00:07:49.111 Boot Partition: Not Supported 00:07:49.112 Memory Page Size Minimum: 4096 bytes 00:07:49.112 Memory Page Size Maximum: 65536 bytes 00:07:49.112 Persistent Memory Region: Not Supported 00:07:49.112 Optional Asynchronous Events Supported 00:07:49.112 Namespace Attribute Notices: Supported 00:07:49.112 Firmware Activation Notices: Not Supported 00:07:49.112 ANA Change Notices: Not Supported 00:07:49.112 PLE Aggregate Log Change Notices: Not Supported 00:07:49.112 LBA Status Info Alert Notices: Not Supported 00:07:49.112 EGE Aggregate Log Change Notices: Not Supported 00:07:49.112 Normal NVM Subsystem Shutdown event: Not Supported 00:07:49.112 Zone Descriptor Change Notices: Not Supported 00:07:49.112 Discovery Log Change Notices: Not Supported 00:07:49.112 Controller Attributes 00:07:49.112 128-bit Host Identifier: Not Supported 00:07:49.112 Non-Operational Permissive Mode: Not Supported 00:07:49.112 NVM Sets: Not Supported 00:07:49.112 Read Recovery Levels: Not Supported 00:07:49.112 Endurance Groups: Supported 00:07:49.112 Predictable Latency Mode: Not Supported 00:07:49.112 Traffic Based Keep Alive: Not Supported 00:07:49.112 Namespace Granularity: Not Supported 00:07:49.112 SQ Associations: Not Supported 00:07:49.112 UUID List: Not Supported 00:07:49.112 Multi-Domain Subsystem: Not Supported 00:07:49.112 Fixed Capacity Management: Not Supported 00:07:49.112 Variable Capacity Management: Not Supported 00:07:49.112 Delete Endurance Group: Not Supported 00:07:49.112 Delete NVM Set: Not Supported 00:07:49.112 Extended LBA Formats Supported: Supported 00:07:49.112 Flexible Data Placement Supported: Supported 00:07:49.112 00:07:49.112 Controller Memory Buffer Support 00:07:49.112 ================================ 00:07:49.112 Supported: No 00:07:49.112 00:07:49.112 Persistent Memory Region Support 00:07:49.112 ================================ 00:07:49.112 Supported: No 00:07:49.112 00:07:49.112 Admin Command Set Attributes 00:07:49.112 ============================ 00:07:49.112 Security Send/Receive: Not Supported 00:07:49.112 Format NVM: Supported 00:07:49.112 Firmware Activate/Download: Not Supported 00:07:49.112 Namespace Management: Supported 00:07:49.112 Device Self-Test: Not Supported 00:07:49.112 Directives: Supported 00:07:49.112 NVMe-MI: Not Supported 00:07:49.112 Virtualization Management: Not Supported 00:07:49.112 Doorbell Buffer Config: Supported 00:07:49.112 Get LBA Status Capability: Not Supported 00:07:49.112 Command & Feature Lockdown Capability: Not Supported 00:07:49.112 Abort Command Limit: 4 00:07:49.112 Async Event Request Limit: 4 00:07:49.112 Number of Firmware Slots: N/A 00:07:49.112 Firmware Slot 1 Read-Only: N/A 00:07:49.112 Firmware Activation Without Reset: N/A 00:07:49.112 Multiple Update Detection Support: N/A 00:07:49.112 Firmware Update Granularity: No Information Provided 00:07:49.112 Per-Namespace SMART Log: Yes 00:07:49.112 Asymmetric Namespace Access Log Page: Not Supported 
00:07:49.112 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:49.112 Command Effects Log Page: Supported 00:07:49.112 Get Log Page Extended Data: Supported 00:07:49.112 Telemetry Log Pages: Not Supported 00:07:49.112 Persistent Event Log Pages: Not Supported 00:07:49.112 Supported Log Pages Log Page: May Support 00:07:49.112 Commands Supported & Effects Log Page: Not Supported 00:07:49.112 Feature Identifiers & Effects Log Page: May Support 00:07:49.112 NVMe-MI Commands & Effects Log Page: May Support 00:07:49.112 Data Area 4 for Telemetry Log: Not Supported 00:07:49.112 Error Log Page Entries Supported: 1 00:07:49.112 Keep Alive: Not Supported 00:07:49.112 00:07:49.112 NVM Command Set Attributes 00:07:49.112 ========================== 00:07:49.112 Submission Queue Entry Size 00:07:49.112 Max: 64 00:07:49.112 Min: 64 00:07:49.112 Completion Queue Entry Size 00:07:49.112 Max: 16 00:07:49.112 Min: 16 00:07:49.112 Number of Namespaces: 256 00:07:49.112 Compare Command: Supported 00:07:49.112 Write Uncorrectable Command: Not Supported 00:07:49.112 Dataset Management Command: Supported 00:07:49.112 Write Zeroes Command: Supported 00:07:49.112 Set Features Save Field: Supported 00:07:49.112 Reservations: Not Supported 00:07:49.112 Timestamp: Supported 00:07:49.112 Copy: Supported 00:07:49.112 Volatile Write Cache: Present 00:07:49.112 Atomic Write Unit (Normal): 1 00:07:49.112 Atomic Write Unit (PFail): 1 00:07:49.112 Atomic Compare & Write Unit: 1 00:07:49.112 Fused Compare & Write: Not Supported 00:07:49.112 Scatter-Gather List 00:07:49.112 SGL Command Set: Supported 00:07:49.112 SGL Keyed: Not Supported 00:07:49.112 SGL Bit Bucket Descriptor: Not Supported 00:07:49.112 SGL Metadata Pointer: Not Supported 00:07:49.112 Oversized SGL: Not Supported 00:07:49.112 SGL Metadata Address: Not Supported 00:07:49.112 SGL Offset: Not Supported 00:07:49.112 Transport SGL Data Block: Not Supported 00:07:49.112 Replay Protected Memory Block: Not Supported 00:07:49.112 00:07:49.112 Firmware Slot Information 00:07:49.112 ========================= 00:07:49.112 Active slot: 1 00:07:49.112 Slot 1 Firmware Revision: 1.0 00:07:49.112 00:07:49.112 00:07:49.112 Commands Supported and Effects 00:07:49.112 ============================== 00:07:49.112 Admin Commands 00:07:49.112 -------------- 00:07:49.112 Delete I/O Submission Queue (00h): Supported 00:07:49.112 Create I/O Submission Queue (01h): Supported 00:07:49.112 Get Log Page (02h): Supported 00:07:49.112 Delete I/O Completion Queue (04h): Supported 00:07:49.112 Create I/O Completion Queue (05h): Supported 00:07:49.112 Identify (06h): Supported 00:07:49.112 Abort (08h): Supported 00:07:49.112 Set Features (09h): Supported 00:07:49.112 Get Features (0Ah): Supported 00:07:49.112 Asynchronous Event Request (0Ch): Supported 00:07:49.112 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:49.112 Directive Send (19h): Supported 00:07:49.112 Directive Receive (1Ah): Supported 00:07:49.112 Virtualization Management (1Ch): Supported 00:07:49.112 Doorbell Buffer Config (7Ch): Supported 00:07:49.112 Format NVM (80h): Supported LBA-Change 00:07:49.112 I/O Commands 00:07:49.112 ------------ 00:07:49.112 Flush (00h): Supported LBA-Change 00:07:49.112 Write (01h): Supported LBA-Change 00:07:49.112 Read (02h): Supported 00:07:49.112 Compare (05h): Supported 00:07:49.112 Write Zeroes (08h): Supported LBA-Change 00:07:49.112 Dataset Management (09h): Supported LBA-Change 00:07:49.112 Unknown (0Ch): Supported 00:07:49.112 Unknown (12h): Supported 00:07:49.112 Copy 
(19h): Supported LBA-Change 00:07:49.112 Unknown (1Dh): Supported LBA-Change 00:07:49.112 00:07:49.112 Error Log 00:07:49.112 ========= 00:07:49.112 00:07:49.112 Arbitration 00:07:49.112 =========== 00:07:49.112 Arbitration Burst: no limit 00:07:49.112 00:07:49.112 Power Management 00:07:49.112 ================ 00:07:49.112 Number of Power States: 1 00:07:49.112 Current Power State: Power State #0 00:07:49.112 Power State #0: 00:07:49.112 Max Power: 25.00 W 00:07:49.112 Non-Operational State: Operational 00:07:49.112 Entry Latency: 16 microseconds 00:07:49.112 Exit Latency: 4 microseconds 00:07:49.112 Relative Read Throughput: 0 00:07:49.112 Relative Read Latency: 0 00:07:49.112 Relative Write Throughput: 0 00:07:49.112 Relative Write Latency: 0 00:07:49.112 Idle Power: Not Reported 00:07:49.112 Active Power: Not Reported 00:07:49.112 Non-Operational Permissive Mode: Not Supported 00:07:49.112 00:07:49.112 Health Information 00:07:49.112 ================== 00:07:49.112 Critical Warnings: 00:07:49.112 Available Spare Space: OK 00:07:49.112 Temperature: OK 00:07:49.112 Device Reliability: OK 00:07:49.112 Read Only: No 00:07:49.112 Volatile Memory Backup: OK 00:07:49.112 Current Temperature: 323 Kelvin (50 Celsius) 00:07:49.112 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:49.112 Available Spare: 0% 00:07:49.112 Available Spare Threshold: 0% 00:07:49.112 Life Percentage Used: 0% 00:07:49.112 Data Units Read: 835 00:07:49.112 Data Units Written: 764 00:07:49.112 Host Read Commands: 38342 00:07:49.112 Host Write Commands: 37765 00:07:49.112 Controller Busy Time: 0 minutes 00:07:49.112 Power Cycles: 0 00:07:49.112 Power On Hours: 0 hours 00:07:49.112 Unsafe Shutdowns: 0 00:07:49.112 Unrecoverable Media Errors: 0 00:07:49.112 Lifetime Error Log Entries: 0 00:07:49.112 Warning Temperature Time: 0 minutes 00:07:49.112 Critical Temperature Time: 0 minutes 00:07:49.112 00:07:49.112 Number of Queues 00:07:49.112 ================ 00:07:49.112 Number of I/O Submission Queues: 64 00:07:49.112 Number of I/O Completion Queues: 64 00:07:49.112 00:07:49.112 ZNS Specific Controller Data 00:07:49.113 ============================ 00:07:49.113 Zone Append Size Limit: 0 00:07:49.113 00:07:49.113 00:07:49.113 Active Namespaces 00:07:49.113 ================= 00:07:49.113 Namespace ID:1 00:07:49.113 Error Recovery Timeout: Unlimited 00:07:49.113 Command Set Identifier: NVM (00h) 00:07:49.113 Deallocate: Supported 00:07:49.113 Deallocated/Unwritten Error: Supported 00:07:49.113 Deallocated Read Value: All 0x00 00:07:49.113 Deallocate in Write Zeroes: Not Supported 00:07:49.113 Deallocated Guard Field: 0xFFFF 00:07:49.113 Flush: Supported 00:07:49.113 Reservation: Not Supported 00:07:49.113 Namespace Sharing Capabilities: Multiple Controllers 00:07:49.113 Size (in LBAs): 262144 (1GiB) 00:07:49.113 Capacity (in LBAs): 262144 (1GiB) 00:07:49.113 Utilization (in LBAs): 262144 (1GiB) 00:07:49.113 Thin Provisioning: Not Supported 00:07:49.113 Per-NS Atomic Units: No 00:07:49.113 Maximum Single Source Range Length: 128 00:07:49.113 Maximum Copy Length: 128 00:07:49.113 Maximum Source Range Count: 128 00:07:49.113 NGUID/EUI64 Never Reused: No 00:07:49.113 Namespace Write Protected: No 00:07:49.113 Endurance group ID: 1 00:07:49.113 Number of LBA Formats: 8 00:07:49.113 Current LBA Format: LBA Format #04 00:07:49.113 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:49.113 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:49.113 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:49.113 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:07:49.113 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:49.113 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:49.113 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:49.113 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:49.113 00:07:49.113 Get Feature FDP: 00:07:49.113 ================ 00:07:49.113 Enabled: Yes 00:07:49.113 FDP configuration index: 0 00:07:49.113 00:07:49.113 FDP configurations log page 00:07:49.113 =========================== 00:07:49.113 Number of FDP configurations: 1 00:07:49.113 Version: 0 00:07:49.113 Size: 112 00:07:49.113 FDP Configuration Descriptor: 0 00:07:49.113 Descriptor Size: 96 00:07:49.113 Reclaim Group Identifier format: 2 00:07:49.113 FDP Volatile Write Cache: Not Present 00:07:49.113 FDP Configuration: Valid 00:07:49.113 Vendor Specific Size: 0 00:07:49.113 Number of Reclaim Groups: 2 00:07:49.113 Number of Reclaim Unit Handles: 8 00:07:49.113 Max Placement Identifiers: 128 00:07:49.113 Number of Namespaces Supported: 256 00:07:49.113 Reclaim Unit Nominal Size: 6000000 bytes 00:07:49.113 Estimated Reclaim Unit Time Limit: Not Reported 00:07:49.113 RUH Desc #000: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #001: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #002: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #003: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #004: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #005: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #006: RUH Type: Initially Isolated 00:07:49.113 RUH Desc #007: RUH Type: Initially Isolated 00:07:49.113 00:07:49.113 FDP reclaim unit handle usage log page 00:07:49.113 ====================================== 00:07:49.113 Number of Reclaim Unit Handles: 8 00:07:49.113 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:49.113 RUH Usage Desc #001: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #002: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #003: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #004: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #005: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #006: RUH Attributes: Unused 00:07:49.113 RUH Usage Desc #007: RUH Attributes: Unused 00:07:49.113 00:07:49.113 FDP statistics log page 00:07:49.113 ======================= 00:07:49.113 Host bytes with metadata written: 490119168 00:07:49.113 Media bytes with metadata written: 490172416 00:07:49.113 Media bytes erased: 0 00:07:49.113 00:07:49.113 FDP events log page 00:07:49.113 =================== 00:07:49.113 Number of FDP events: 0 00:07:49.113 00:07:49.113 NVM Specific Namespace Data 00:07:49.113 =========================== 00:07:49.113 Logical Block Storage Tag Mask: 0 00:07:49.113 Protection Information Capabilities: 00:07:49.113 16b Guard Protection Information Storage Tag Support: No 00:07:49.113 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:49.113 Storage Tag Check Read Support: No 00:07:49.113 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:49.113 00:07:49.113 real 0m1.278s 00:07:49.113 user 0m0.464s 00:07:49.113 sys 0m0.598s 00:07:49.113 16:54:23 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.113 ************************************ 00:07:49.113 END TEST nvme_identify 00:07:49.113 ************************************ 00:07:49.113 16:54:23 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:49.375 16:54:23 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:49.375 16:54:23 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.375 16:54:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.375 16:54:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:49.375 ************************************ 00:07:49.375 START TEST nvme_perf 00:07:49.375 ************************************ 00:07:49.375 16:54:23 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:49.375 16:54:23 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:50.763 Initializing NVMe Controllers 00:07:50.763 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:50.763 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:50.763 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:50.763 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:50.763 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:50.763 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:50.763 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:50.763 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:50.763 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:50.763 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:50.763 Initialization complete. Launching workers. 
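Two notes on reading this log, placed here between test initialization and the perf results. First, the spdk_nvme_identify dumps above print derived human-readable values next to the raw NVMe fields: composite temperature arrives in Kelvin and is converted with an integer offset of 273 (an assumption inferred from the printed pairs, not confirmed against the tool's source), and namespace capacity is the LBA count times the data size of the current LBA format. A minimal illustrative sketch in Python, with constants copied from the dumps; nothing here queries a device:

    # Sketch only: reproduce the derived values shown in the identify dumps.
    def kelvin_to_celsius(kelvin: int) -> int:
        # Offset of 273 is an assumption inferred from the printed pairs.
        return kelvin - 273

    def capacity_gib(num_lbas: int, lba_data_size: int) -> float:
        # Capacity = LBA count * data size of the current LBA format
        # (format #04 above has a 4096-byte data size).
        return num_lbas * lba_data_size / 2**30

    assert kelvin_to_celsius(323) == 50          # "323 Kelvin (50 Celsius)"
    assert kelvin_to_celsius(343) == 70          # "343 Kelvin (70 Celsius)"
    assert capacity_gib(1048576, 4096) == 4.0    # "Size (in LBAs): 1048576 (4GiB)"
    assert capacity_gib(262144, 4096) == 1.0     # "Size (in LBAs): 262144 (1GiB)"

Second, the nvme_perf test invokes spdk_nvme_perf with queue depth 128 (-q), a read workload (-w), 12288-byte I/Os (-o), and a one-second run (-t); per the tool's usage text, -L enables latency tracking, and giving it twice (-LL) also produces the detailed per-bucket histograms that follow. Each percentile line in the "Summary latency data" blocks below can be read as the first histogram bucket whose cumulative I/O count reaches that percentile; a minimal sketch of that lookup under that assumption, using invented bucket values rather than the real histograms:

    # Sketch only: percentile lookup over a cumulative latency histogram,
    # mirroring the "Summary latency data" blocks below.
    from bisect import bisect_left

    def percentile(buckets: list[tuple[float, int]], pct: float) -> float:
        # buckets: ascending (latency_us, cumulative_io_count) pairs.
        total = buckets[-1][1]
        cumulative = [count for _, count in buckets]
        i = bisect_left(cumulative, pct / 100.0 * total)
        return buckets[i][0]

    # Invented example: 100 I/Os spread over four latency buckets.
    hist = [(13000.0, 10), (14500.0, 45), (15500.0, 80), (19000.0, 100)]
    for pct in (1, 10, 25, 50, 75, 90, 95, 98, 99):
        print(f"{pct:9.5f}% : {percentile(hist, pct):.3f}us")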
00:07:50.763 ======================================================== 00:07:50.763 Latency(us) 00:07:50.763 Device Information : IOPS MiB/s Average min max 00:07:50.763 PCIE (0000:00:13.0) NSID 1 from core 0: 8045.14 94.28 15941.24 12678.03 48762.40 00:07:50.763 PCIE (0000:00:10.0) NSID 1 from core 0: 8045.14 94.28 15914.30 12497.70 47531.58 00:07:50.763 PCIE (0000:00:11.0) NSID 1 from core 0: 8045.14 94.28 15888.88 12641.70 46306.75 00:07:50.763 PCIE (0000:00:12.0) NSID 1 from core 0: 8045.14 94.28 15860.90 11719.35 45860.73 00:07:50.763 PCIE (0000:00:12.0) NSID 2 from core 0: 8045.14 94.28 15833.60 11186.14 44203.10 00:07:50.763 PCIE (0000:00:12.0) NSID 3 from core 0: 8108.99 95.03 15682.19 10631.00 35347.07 00:07:50.763 ======================================================== 00:07:50.763 Total : 48334.70 566.42 15853.29 10631.00 48762.40 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 13006.375us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15526.991us 00:07:50.763 75.00000% : 16535.237us 00:07:50.763 90.00000% : 17644.308us 00:07:50.763 95.00000% : 18249.255us 00:07:50.763 98.00000% : 19156.677us 00:07:50.763 99.00000% : 38313.354us 00:07:50.763 99.50000% : 47790.868us 00:07:50.763 99.90000% : 48597.465us 00:07:50.763 99.99000% : 48799.114us 00:07:50.763 99.99900% : 48799.114us 00:07:50.763 99.99990% : 48799.114us 00:07:50.763 99.99999% : 48799.114us 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 12804.726us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15526.991us 00:07:50.763 75.00000% : 16636.062us 00:07:50.763 90.00000% : 17644.308us 00:07:50.763 95.00000% : 18148.431us 00:07:50.763 98.00000% : 19156.677us 00:07:50.763 99.00000% : 37305.108us 00:07:50.763 99.50000% : 46379.323us 00:07:50.763 99.90000% : 47387.569us 00:07:50.763 99.99000% : 47589.218us 00:07:50.763 99.99900% : 47589.218us 00:07:50.763 99.99990% : 47589.218us 00:07:50.763 99.99999% : 47589.218us 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 13006.375us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15426.166us 00:07:50.763 75.00000% : 16636.062us 00:07:50.763 90.00000% : 17644.308us 00:07:50.763 95.00000% : 18249.255us 00:07:50.763 98.00000% : 19156.677us 00:07:50.763 99.00000% : 35490.265us 00:07:50.763 99.50000% : 45371.077us 00:07:50.763 99.90000% : 46177.674us 00:07:50.763 99.99000% : 46379.323us 00:07:50.763 99.99900% : 46379.323us 00:07:50.763 99.99990% : 46379.323us 00:07:50.763 99.99999% : 46379.323us 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 12754.314us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15426.166us 00:07:50.763 75.00000% : 16535.237us 00:07:50.763 90.00000% : 17745.132us 00:07:50.763 95.00000% : 18350.080us 00:07:50.763 98.00000% : 19156.677us 
00:07:50.763 99.00000% : 35490.265us 00:07:50.763 99.50000% : 44766.129us 00:07:50.763 99.90000% : 45774.375us 00:07:50.763 99.99000% : 45976.025us 00:07:50.763 99.99900% : 45976.025us 00:07:50.763 99.99990% : 45976.025us 00:07:50.763 99.99999% : 45976.025us 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 12754.314us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15426.166us 00:07:50.763 75.00000% : 16535.237us 00:07:50.763 90.00000% : 17644.308us 00:07:50.763 95.00000% : 18350.080us 00:07:50.763 98.00000% : 19257.502us 00:07:50.763 99.00000% : 34078.720us 00:07:50.763 99.50000% : 43152.935us 00:07:50.763 99.90000% : 44161.182us 00:07:50.763 99.99000% : 44362.831us 00:07:50.763 99.99900% : 44362.831us 00:07:50.763 99.99990% : 44362.831us 00:07:50.763 99.99999% : 44362.831us 00:07:50.763 00:07:50.763 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.763 ================================================================================= 00:07:50.763 1.00000% : 12754.314us 00:07:50.763 10.00000% : 13712.148us 00:07:50.763 25.00000% : 14518.745us 00:07:50.763 50.00000% : 15526.991us 00:07:50.763 75.00000% : 16434.412us 00:07:50.763 90.00000% : 17644.308us 00:07:50.763 95.00000% : 18249.255us 00:07:50.763 98.00000% : 19257.502us 00:07:50.763 99.00000% : 24500.382us 00:07:50.763 99.50000% : 33272.123us 00:07:50.763 99.90000% : 35288.615us 00:07:50.763 99.99000% : 35490.265us 00:07:50.763 99.99900% : 35490.265us 00:07:50.763 99.99990% : 35490.265us 00:07:50.763 99.99999% : 35490.265us 00:07:50.763 00:07:50.763 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:50.763 ============================================================================== 00:07:50.763 Range in us Cumulative IO count 00:07:50.763 12653.489 - 12703.902: 0.0248% ( 2) 00:07:50.763 12703.902 - 12754.314: 0.0496% ( 2) 00:07:50.763 12754.314 - 12804.726: 0.1116% ( 5) 00:07:50.763 12804.726 - 12855.138: 0.2852% ( 14) 00:07:50.763 12855.138 - 12905.551: 0.4960% ( 17) 00:07:50.763 12905.551 - 13006.375: 1.2277% ( 59) 00:07:50.763 13006.375 - 13107.200: 2.2321% ( 81) 00:07:50.763 13107.200 - 13208.025: 3.2242% ( 80) 00:07:50.763 13208.025 - 13308.849: 4.4023% ( 95) 00:07:50.763 13308.849 - 13409.674: 5.9648% ( 126) 00:07:50.763 13409.674 - 13510.498: 7.7381% ( 143) 00:07:50.763 13510.498 - 13611.323: 9.3998% ( 134) 00:07:50.763 13611.323 - 13712.148: 10.9499% ( 125) 00:07:50.763 13712.148 - 13812.972: 12.5000% ( 125) 00:07:50.763 13812.972 - 13913.797: 14.0377% ( 124) 00:07:50.763 13913.797 - 14014.622: 15.6622% ( 131) 00:07:50.763 14014.622 - 14115.446: 17.4975% ( 148) 00:07:50.763 14115.446 - 14216.271: 19.2460% ( 141) 00:07:50.763 14216.271 - 14317.095: 21.2054% ( 158) 00:07:50.763 14317.095 - 14417.920: 23.5243% ( 187) 00:07:50.763 14417.920 - 14518.745: 26.0169% ( 201) 00:07:50.763 14518.745 - 14619.569: 28.4722% ( 198) 00:07:50.763 14619.569 - 14720.394: 30.8656% ( 193) 00:07:50.763 14720.394 - 14821.218: 33.2961% ( 196) 00:07:50.763 14821.218 - 14922.043: 35.6647% ( 191) 00:07:50.763 14922.043 - 15022.868: 38.2068% ( 205) 00:07:50.763 15022.868 - 15123.692: 40.8606% ( 214) 00:07:50.763 15123.692 - 15224.517: 43.4524% ( 209) 00:07:50.763 15224.517 - 15325.342: 46.2054% ( 222) 00:07:50.763 15325.342 - 15426.166: 48.9087% ( 218) 00:07:50.763 15426.166 - 15526.991: 51.4137% ( 202) 
00:07:50.763 15526.991 - 15627.815: 53.9062% ( 201) 00:07:50.763 15627.815 - 15728.640: 56.3988% ( 201) 00:07:50.763 15728.640 - 15829.465: 58.8046% ( 194) 00:07:50.763 15829.465 - 15930.289: 61.3219% ( 203) 00:07:50.763 15930.289 - 16031.114: 64.2237% ( 234) 00:07:50.763 16031.114 - 16131.938: 66.7535% ( 204) 00:07:50.763 16131.938 - 16232.763: 69.3328% ( 208) 00:07:50.763 16232.763 - 16333.588: 71.6394% ( 186) 00:07:50.763 16333.588 - 16434.412: 74.0823% ( 197) 00:07:50.763 16434.412 - 16535.237: 76.0541% ( 159) 00:07:50.763 16535.237 - 16636.062: 77.8274% ( 143) 00:07:50.763 16636.062 - 16736.886: 79.4023% ( 127) 00:07:50.763 16736.886 - 16837.711: 80.8284% ( 115) 00:07:50.763 16837.711 - 16938.535: 82.0933% ( 102) 00:07:50.763 16938.535 - 17039.360: 83.2341% ( 92) 00:07:50.763 17039.360 - 17140.185: 84.6354% ( 113) 00:07:50.764 17140.185 - 17241.009: 85.8631% ( 99) 00:07:50.764 17241.009 - 17341.834: 87.1280% ( 102) 00:07:50.764 17341.834 - 17442.658: 88.3185% ( 96) 00:07:50.764 17442.658 - 17543.483: 89.5957% ( 103) 00:07:50.764 17543.483 - 17644.308: 90.7738% ( 95) 00:07:50.764 17644.308 - 17745.132: 91.9147% ( 92) 00:07:50.764 17745.132 - 17845.957: 92.8819% ( 78) 00:07:50.764 17845.957 - 17946.782: 93.6508% ( 62) 00:07:50.764 17946.782 - 18047.606: 94.3948% ( 60) 00:07:50.764 18047.606 - 18148.431: 94.9529% ( 45) 00:07:50.764 18148.431 - 18249.255: 95.5233% ( 46) 00:07:50.764 18249.255 - 18350.080: 95.9201% ( 32) 00:07:50.764 18350.080 - 18450.905: 96.2302% ( 25) 00:07:50.764 18450.905 - 18551.729: 96.5650% ( 27) 00:07:50.764 18551.729 - 18652.554: 96.9246% ( 29) 00:07:50.764 18652.554 - 18753.378: 97.2346% ( 25) 00:07:50.764 18753.378 - 18854.203: 97.5198% ( 23) 00:07:50.764 18854.203 - 18955.028: 97.7183% ( 16) 00:07:50.764 18955.028 - 19055.852: 97.9043% ( 15) 00:07:50.764 19055.852 - 19156.677: 98.0655% ( 13) 00:07:50.764 19156.677 - 19257.502: 98.2143% ( 12) 00:07:50.764 19257.502 - 19358.326: 98.2763% ( 5) 00:07:50.764 19358.326 - 19459.151: 98.3507% ( 6) 00:07:50.764 19459.151 - 19559.975: 98.4127% ( 5) 00:07:50.764 36901.809 - 37103.458: 98.4623% ( 4) 00:07:50.764 37103.458 - 37305.108: 98.5615% ( 8) 00:07:50.764 37305.108 - 37506.757: 98.6483% ( 7) 00:07:50.764 37506.757 - 37708.406: 98.7351% ( 7) 00:07:50.764 37708.406 - 37910.055: 98.8219% ( 7) 00:07:50.764 37910.055 - 38111.705: 98.9211% ( 8) 00:07:50.764 38111.705 - 38313.354: 99.0079% ( 7) 00:07:50.764 38313.354 - 38515.003: 99.1071% ( 8) 00:07:50.764 38515.003 - 38716.652: 99.1939% ( 7) 00:07:50.764 38716.652 - 38918.302: 99.2063% ( 1) 00:07:50.764 46984.271 - 47185.920: 99.2560% ( 4) 00:07:50.764 47185.920 - 47387.569: 99.3552% ( 8) 00:07:50.764 47387.569 - 47589.218: 99.4420% ( 7) 00:07:50.764 47589.218 - 47790.868: 99.5412% ( 8) 00:07:50.764 47790.868 - 47992.517: 99.6404% ( 8) 00:07:50.764 47992.517 - 48194.166: 99.7272% ( 7) 00:07:50.764 48194.166 - 48395.815: 99.8388% ( 9) 00:07:50.764 48395.815 - 48597.465: 99.9256% ( 7) 00:07:50.764 48597.465 - 48799.114: 100.0000% ( 6) 00:07:50.764 00:07:50.764 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:50.764 ============================================================================== 00:07:50.764 Range in us Cumulative IO count 00:07:50.764 12451.840 - 12502.252: 0.0124% ( 1) 00:07:50.764 12502.252 - 12552.665: 0.0744% ( 5) 00:07:50.764 12552.665 - 12603.077: 0.1116% ( 3) 00:07:50.764 12603.077 - 12653.489: 0.2108% ( 8) 00:07:50.764 12653.489 - 12703.902: 0.3844% ( 14) 00:07:50.764 12703.902 - 12754.314: 0.6200% ( 19) 00:07:50.764 12754.314 - 
12804.726: 1.0169% ( 32) 00:07:50.764 12804.726 - 12855.138: 1.4137% ( 32) 00:07:50.764 12855.138 - 12905.551: 1.9469% ( 43) 00:07:50.764 12905.551 - 13006.375: 2.7778% ( 67) 00:07:50.764 13006.375 - 13107.200: 3.7078% ( 75) 00:07:50.764 13107.200 - 13208.025: 4.8487% ( 92) 00:07:50.764 13208.025 - 13308.849: 5.9152% ( 86) 00:07:50.764 13308.849 - 13409.674: 7.1801% ( 102) 00:07:50.764 13409.674 - 13510.498: 8.3581% ( 95) 00:07:50.764 13510.498 - 13611.323: 9.5982% ( 100) 00:07:50.764 13611.323 - 13712.148: 11.1111% ( 122) 00:07:50.764 13712.148 - 13812.972: 12.4256% ( 106) 00:07:50.764 13812.972 - 13913.797: 13.8269% ( 113) 00:07:50.764 13913.797 - 14014.622: 15.6622% ( 148) 00:07:50.764 14014.622 - 14115.446: 17.6835% ( 163) 00:07:50.764 14115.446 - 14216.271: 19.8537% ( 175) 00:07:50.764 14216.271 - 14317.095: 21.6766% ( 147) 00:07:50.764 14317.095 - 14417.920: 24.1939% ( 203) 00:07:50.764 14417.920 - 14518.745: 26.5997% ( 194) 00:07:50.764 14518.745 - 14619.569: 28.9435% ( 189) 00:07:50.764 14619.569 - 14720.394: 30.9276% ( 160) 00:07:50.764 14720.394 - 14821.218: 33.5689% ( 213) 00:07:50.764 14821.218 - 14922.043: 35.8011% ( 180) 00:07:50.764 14922.043 - 15022.868: 38.5045% ( 218) 00:07:50.764 15022.868 - 15123.692: 40.9598% ( 198) 00:07:50.764 15123.692 - 15224.517: 43.6880% ( 220) 00:07:50.764 15224.517 - 15325.342: 46.7014% ( 243) 00:07:50.764 15325.342 - 15426.166: 49.3552% ( 214) 00:07:50.764 15426.166 - 15526.991: 52.1825% ( 228) 00:07:50.764 15526.991 - 15627.815: 54.7123% ( 204) 00:07:50.764 15627.815 - 15728.640: 57.4281% ( 219) 00:07:50.764 15728.640 - 15829.465: 59.6602% ( 180) 00:07:50.764 15829.465 - 15930.289: 62.0536% ( 193) 00:07:50.764 15930.289 - 16031.114: 64.2981% ( 181) 00:07:50.764 16031.114 - 16131.938: 66.5055% ( 178) 00:07:50.764 16131.938 - 16232.763: 68.6384% ( 172) 00:07:50.764 16232.763 - 16333.588: 70.6473% ( 162) 00:07:50.764 16333.588 - 16434.412: 72.6562% ( 162) 00:07:50.764 16434.412 - 16535.237: 74.5412% ( 152) 00:07:50.764 16535.237 - 16636.062: 76.1781% ( 132) 00:07:50.764 16636.062 - 16736.886: 77.6910% ( 122) 00:07:50.764 16736.886 - 16837.711: 79.1047% ( 114) 00:07:50.764 16837.711 - 16938.535: 80.5556% ( 117) 00:07:50.764 16938.535 - 17039.360: 81.9940% ( 116) 00:07:50.764 17039.360 - 17140.185: 83.4077% ( 114) 00:07:50.764 17140.185 - 17241.009: 84.9826% ( 127) 00:07:50.764 17241.009 - 17341.834: 86.5947% ( 130) 00:07:50.764 17341.834 - 17442.658: 88.0456% ( 117) 00:07:50.764 17442.658 - 17543.483: 89.4469% ( 113) 00:07:50.764 17543.483 - 17644.308: 90.7490% ( 105) 00:07:50.764 17644.308 - 17745.132: 91.7535% ( 81) 00:07:50.764 17745.132 - 17845.957: 92.8695% ( 90) 00:07:50.764 17845.957 - 17946.782: 93.7872% ( 74) 00:07:50.764 17946.782 - 18047.606: 94.6553% ( 70) 00:07:50.764 18047.606 - 18148.431: 95.2009% ( 44) 00:07:50.764 18148.431 - 18249.255: 95.8333% ( 51) 00:07:50.764 18249.255 - 18350.080: 96.4782% ( 52) 00:07:50.764 18350.080 - 18450.905: 96.7510% ( 22) 00:07:50.764 18450.905 - 18551.729: 97.0114% ( 21) 00:07:50.764 18551.729 - 18652.554: 97.2222% ( 17) 00:07:50.764 18652.554 - 18753.378: 97.3338% ( 9) 00:07:50.764 18753.378 - 18854.203: 97.5198% ( 15) 00:07:50.764 18854.203 - 18955.028: 97.7059% ( 15) 00:07:50.764 18955.028 - 19055.852: 97.8423% ( 11) 00:07:50.764 19055.852 - 19156.677: 98.0903% ( 20) 00:07:50.764 19156.677 - 19257.502: 98.2019% ( 9) 00:07:50.764 19257.502 - 19358.326: 98.2639% ( 5) 00:07:50.764 19358.326 - 19459.151: 98.3383% ( 6) 00:07:50.764 19459.151 - 19559.975: 98.4003% ( 5) 00:07:50.764 19559.975 - 
19660.800: 98.4127% ( 1) 00:07:50.764 35490.265 - 35691.914: 98.4251% ( 1) 00:07:50.764 35691.914 - 35893.563: 98.4995% ( 6) 00:07:50.764 35893.563 - 36095.212: 98.5739% ( 6) 00:07:50.764 36095.212 - 36296.862: 98.6483% ( 6) 00:07:50.764 36296.862 - 36498.511: 98.7351% ( 7) 00:07:50.764 36498.511 - 36700.160: 98.8095% ( 6) 00:07:50.764 36700.160 - 36901.809: 98.9211% ( 9) 00:07:50.764 36901.809 - 37103.458: 98.9955% ( 6) 00:07:50.764 37103.458 - 37305.108: 99.0823% ( 7) 00:07:50.764 37305.108 - 37506.757: 99.1691% ( 7) 00:07:50.764 37506.757 - 37708.406: 99.2063% ( 3) 00:07:50.764 45572.726 - 45774.375: 99.2808% ( 6) 00:07:50.764 45774.375 - 45976.025: 99.3552% ( 6) 00:07:50.764 45976.025 - 46177.674: 99.4544% ( 8) 00:07:50.764 46177.674 - 46379.323: 99.5288% ( 6) 00:07:50.764 46379.323 - 46580.972: 99.6156% ( 7) 00:07:50.764 46580.972 - 46782.622: 99.6900% ( 6) 00:07:50.764 46782.622 - 46984.271: 99.7520% ( 5) 00:07:50.764 46984.271 - 47185.920: 99.8512% ( 8) 00:07:50.764 47185.920 - 47387.569: 99.9380% ( 7) 00:07:50.764 47387.569 - 47589.218: 100.0000% ( 5) 00:07:50.764 00:07:50.764 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:50.764 ============================================================================== 00:07:50.764 Range in us Cumulative IO count 00:07:50.764 12603.077 - 12653.489: 0.0124% ( 1) 00:07:50.764 12653.489 - 12703.902: 0.1240% ( 9) 00:07:50.764 12703.902 - 12754.314: 0.3720% ( 20) 00:07:50.764 12754.314 - 12804.726: 0.4588% ( 7) 00:07:50.764 12804.726 - 12855.138: 0.6572% ( 16) 00:07:50.764 12855.138 - 12905.551: 0.9549% ( 24) 00:07:50.764 12905.551 - 13006.375: 1.6493% ( 56) 00:07:50.764 13006.375 - 13107.200: 2.4926% ( 68) 00:07:50.764 13107.200 - 13208.025: 3.4722% ( 79) 00:07:50.764 13208.025 - 13308.849: 4.5759% ( 89) 00:07:50.764 13308.849 - 13409.674: 5.7416% ( 94) 00:07:50.764 13409.674 - 13510.498: 7.4033% ( 134) 00:07:50.764 13510.498 - 13611.323: 8.9906% ( 128) 00:07:50.764 13611.323 - 13712.148: 10.5655% ( 127) 00:07:50.764 13712.148 - 13812.972: 12.3140% ( 141) 00:07:50.764 13812.972 - 13913.797: 14.0377% ( 139) 00:07:50.764 13913.797 - 14014.622: 15.7862% ( 141) 00:07:50.764 14014.622 - 14115.446: 17.6835% ( 153) 00:07:50.764 14115.446 - 14216.271: 19.8413% ( 174) 00:07:50.764 14216.271 - 14317.095: 22.2718% ( 196) 00:07:50.764 14317.095 - 14417.920: 24.6528% ( 192) 00:07:50.764 14417.920 - 14518.745: 26.7609% ( 170) 00:07:50.764 14518.745 - 14619.569: 28.9931% ( 180) 00:07:50.764 14619.569 - 14720.394: 31.3616% ( 191) 00:07:50.764 14720.394 - 14821.218: 33.7550% ( 193) 00:07:50.764 14821.218 - 14922.043: 36.2475% ( 201) 00:07:50.764 14922.043 - 15022.868: 39.1245% ( 232) 00:07:50.764 15022.868 - 15123.692: 41.9767% ( 230) 00:07:50.764 15123.692 - 15224.517: 44.7297% ( 222) 00:07:50.764 15224.517 - 15325.342: 47.4702% ( 221) 00:07:50.764 15325.342 - 15426.166: 50.2604% ( 225) 00:07:50.764 15426.166 - 15526.991: 53.1126% ( 230) 00:07:50.764 15526.991 - 15627.815: 55.8160% ( 218) 00:07:50.764 15627.815 - 15728.640: 58.4449% ( 212) 00:07:50.764 15728.640 - 15829.465: 60.9747% ( 204) 00:07:50.764 15829.465 - 15930.289: 63.2937% ( 187) 00:07:50.764 15930.289 - 16031.114: 65.4266% ( 172) 00:07:50.764 16031.114 - 16131.938: 67.4975% ( 167) 00:07:50.764 16131.938 - 16232.763: 69.3452% ( 149) 00:07:50.764 16232.763 - 16333.588: 71.0689% ( 139) 00:07:50.765 16333.588 - 16434.412: 72.7679% ( 137) 00:07:50.765 16434.412 - 16535.237: 74.6280% ( 150) 00:07:50.765 16535.237 - 16636.062: 76.3145% ( 136) 00:07:50.765 16636.062 - 16736.886: 77.8274% 
( 122) 00:07:50.765 16736.886 - 16837.711: 79.6875% ( 150) 00:07:50.765 16837.711 - 16938.535: 81.3740% ( 136) 00:07:50.765 16938.535 - 17039.360: 83.0481% ( 135) 00:07:50.765 17039.360 - 17140.185: 84.4618% ( 114) 00:07:50.765 17140.185 - 17241.009: 85.8011% ( 108) 00:07:50.765 17241.009 - 17341.834: 87.0040% ( 97) 00:07:50.765 17341.834 - 17442.658: 88.1324% ( 91) 00:07:50.765 17442.658 - 17543.483: 89.1493% ( 82) 00:07:50.765 17543.483 - 17644.308: 90.1414% ( 80) 00:07:50.765 17644.308 - 17745.132: 91.1706% ( 83) 00:07:50.765 17745.132 - 17845.957: 92.2123% ( 84) 00:07:50.765 17845.957 - 17946.782: 93.0060% ( 64) 00:07:50.765 17946.782 - 18047.606: 93.7748% ( 62) 00:07:50.765 18047.606 - 18148.431: 94.5933% ( 66) 00:07:50.765 18148.431 - 18249.255: 95.1761% ( 47) 00:07:50.765 18249.255 - 18350.080: 95.6101% ( 35) 00:07:50.765 18350.080 - 18450.905: 96.0689% ( 37) 00:07:50.765 18450.905 - 18551.729: 96.4534% ( 31) 00:07:50.765 18551.729 - 18652.554: 96.8502% ( 32) 00:07:50.765 18652.554 - 18753.378: 97.1850% ( 27) 00:07:50.765 18753.378 - 18854.203: 97.4454% ( 21) 00:07:50.765 18854.203 - 18955.028: 97.6935% ( 20) 00:07:50.765 18955.028 - 19055.852: 97.8671% ( 14) 00:07:50.765 19055.852 - 19156.677: 98.0655% ( 16) 00:07:50.765 19156.677 - 19257.502: 98.2515% ( 15) 00:07:50.765 19257.502 - 19358.326: 98.3507% ( 8) 00:07:50.765 19358.326 - 19459.151: 98.4127% ( 5) 00:07:50.765 34078.720 - 34280.369: 98.4871% ( 6) 00:07:50.765 34280.369 - 34482.018: 98.5739% ( 7) 00:07:50.765 34482.018 - 34683.668: 98.6607% ( 7) 00:07:50.765 34683.668 - 34885.317: 98.7475% ( 7) 00:07:50.765 34885.317 - 35086.966: 98.8343% ( 7) 00:07:50.765 35086.966 - 35288.615: 98.9211% ( 7) 00:07:50.765 35288.615 - 35490.265: 99.0079% ( 7) 00:07:50.765 35490.265 - 35691.914: 99.0947% ( 7) 00:07:50.765 35691.914 - 35893.563: 99.1939% ( 8) 00:07:50.765 35893.563 - 36095.212: 99.2063% ( 1) 00:07:50.765 44362.831 - 44564.480: 99.2312% ( 2) 00:07:50.765 44564.480 - 44766.129: 99.3180% ( 7) 00:07:50.765 44766.129 - 44967.778: 99.3924% ( 6) 00:07:50.765 44967.778 - 45169.428: 99.4792% ( 7) 00:07:50.765 45169.428 - 45371.077: 99.5660% ( 7) 00:07:50.765 45371.077 - 45572.726: 99.6528% ( 7) 00:07:50.765 45572.726 - 45774.375: 99.7520% ( 8) 00:07:50.765 45774.375 - 45976.025: 99.8388% ( 7) 00:07:50.765 45976.025 - 46177.674: 99.9380% ( 8) 00:07:50.765 46177.674 - 46379.323: 100.0000% ( 5) 00:07:50.765 00:07:50.765 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:50.765 ============================================================================== 00:07:50.765 Range in us Cumulative IO count 00:07:50.765 11695.655 - 11746.068: 0.0248% ( 2) 00:07:50.765 11746.068 - 11796.480: 0.0496% ( 2) 00:07:50.765 11796.480 - 11846.892: 0.0868% ( 3) 00:07:50.765 11846.892 - 11897.305: 0.1116% ( 2) 00:07:50.765 11897.305 - 11947.717: 0.1488% ( 3) 00:07:50.765 11947.717 - 11998.129: 0.1984% ( 4) 00:07:50.765 11998.129 - 12048.542: 0.2604% ( 5) 00:07:50.765 12048.542 - 12098.954: 0.3472% ( 7) 00:07:50.765 12098.954 - 12149.366: 0.4092% ( 5) 00:07:50.765 12149.366 - 12199.778: 0.4340% ( 2) 00:07:50.765 12199.778 - 12250.191: 0.4588% ( 2) 00:07:50.765 12250.191 - 12300.603: 0.4836% ( 2) 00:07:50.765 12300.603 - 12351.015: 0.5084% ( 2) 00:07:50.765 12351.015 - 12401.428: 0.5208% ( 1) 00:07:50.765 12451.840 - 12502.252: 0.5456% ( 2) 00:07:50.765 12502.252 - 12552.665: 0.5828% ( 3) 00:07:50.765 12552.665 - 12603.077: 0.6200% ( 3) 00:07:50.765 12603.077 - 12653.489: 0.6448% ( 2) 00:07:50.765 12653.489 - 12703.902: 0.7192% ( 6) 
00:07:50.765 12703.902 - 12754.314: 1.0045% ( 23) 00:07:50.765 12754.314 - 12804.726: 1.2153% ( 17) 00:07:50.765 12804.726 - 12855.138: 1.5749% ( 29) 00:07:50.765 12855.138 - 12905.551: 1.8477% ( 22) 00:07:50.765 12905.551 - 13006.375: 2.2941% ( 36) 00:07:50.765 13006.375 - 13107.200: 3.0134% ( 58) 00:07:50.765 13107.200 - 13208.025: 3.8814% ( 70) 00:07:50.765 13208.025 - 13308.849: 4.8363% ( 77) 00:07:50.765 13308.849 - 13409.674: 5.9648% ( 91) 00:07:50.765 13409.674 - 13510.498: 7.4405% ( 119) 00:07:50.765 13510.498 - 13611.323: 9.0526% ( 130) 00:07:50.765 13611.323 - 13712.148: 10.7639% ( 138) 00:07:50.765 13712.148 - 13812.972: 12.2396% ( 119) 00:07:50.765 13812.972 - 13913.797: 13.9881% ( 141) 00:07:50.765 13913.797 - 14014.622: 15.8110% ( 147) 00:07:50.765 14014.622 - 14115.446: 18.1424% ( 188) 00:07:50.765 14115.446 - 14216.271: 20.3497% ( 178) 00:07:50.765 14216.271 - 14317.095: 22.8299% ( 200) 00:07:50.765 14317.095 - 14417.920: 24.9504% ( 171) 00:07:50.765 14417.920 - 14518.745: 27.1949% ( 181) 00:07:50.765 14518.745 - 14619.569: 29.6627% ( 199) 00:07:50.765 14619.569 - 14720.394: 32.0437% ( 192) 00:07:50.765 14720.394 - 14821.218: 34.6726% ( 212) 00:07:50.765 14821.218 - 14922.043: 37.3884% ( 219) 00:07:50.765 14922.043 - 15022.868: 40.2778% ( 233) 00:07:50.765 15022.868 - 15123.692: 43.2912% ( 243) 00:07:50.765 15123.692 - 15224.517: 46.0938% ( 226) 00:07:50.765 15224.517 - 15325.342: 48.5739% ( 200) 00:07:50.765 15325.342 - 15426.166: 51.0541% ( 200) 00:07:50.765 15426.166 - 15526.991: 53.7698% ( 219) 00:07:50.765 15526.991 - 15627.815: 56.2872% ( 203) 00:07:50.765 15627.815 - 15728.640: 58.6930% ( 194) 00:07:50.765 15728.640 - 15829.465: 61.3715% ( 216) 00:07:50.765 15829.465 - 15930.289: 64.0129% ( 213) 00:07:50.765 15930.289 - 16031.114: 66.2822% ( 183) 00:07:50.765 16031.114 - 16131.938: 68.2540% ( 159) 00:07:50.765 16131.938 - 16232.763: 70.0645% ( 146) 00:07:50.765 16232.763 - 16333.588: 71.8254% ( 142) 00:07:50.765 16333.588 - 16434.412: 73.5615% ( 140) 00:07:50.765 16434.412 - 16535.237: 75.2356% ( 135) 00:07:50.765 16535.237 - 16636.062: 76.6741% ( 116) 00:07:50.765 16636.062 - 16736.886: 78.3482% ( 135) 00:07:50.765 16736.886 - 16837.711: 80.0223% ( 135) 00:07:50.765 16837.711 - 16938.535: 81.5104% ( 120) 00:07:50.765 16938.535 - 17039.360: 82.9613% ( 117) 00:07:50.765 17039.360 - 17140.185: 84.3254% ( 110) 00:07:50.765 17140.185 - 17241.009: 85.5159% ( 96) 00:07:50.765 17241.009 - 17341.834: 86.6567% ( 92) 00:07:50.765 17341.834 - 17442.658: 87.9960% ( 108) 00:07:50.765 17442.658 - 17543.483: 89.0377% ( 84) 00:07:50.765 17543.483 - 17644.308: 89.9802% ( 76) 00:07:50.765 17644.308 - 17745.132: 90.9102% ( 75) 00:07:50.765 17745.132 - 17845.957: 91.8155% ( 73) 00:07:50.765 17845.957 - 17946.782: 92.6091% ( 64) 00:07:50.765 17946.782 - 18047.606: 93.3780% ( 62) 00:07:50.765 18047.606 - 18148.431: 93.9856% ( 49) 00:07:50.765 18148.431 - 18249.255: 94.5809% ( 48) 00:07:50.765 18249.255 - 18350.080: 95.1761% ( 48) 00:07:50.765 18350.080 - 18450.905: 95.7465% ( 46) 00:07:50.765 18450.905 - 18551.729: 96.2674% ( 42) 00:07:50.765 18551.729 - 18652.554: 96.8130% ( 44) 00:07:50.765 18652.554 - 18753.378: 97.1726% ( 29) 00:07:50.765 18753.378 - 18854.203: 97.4454% ( 22) 00:07:50.765 18854.203 - 18955.028: 97.7059% ( 21) 00:07:50.765 18955.028 - 19055.852: 97.9539% ( 20) 00:07:50.765 19055.852 - 19156.677: 98.1399% ( 15) 00:07:50.765 19156.677 - 19257.502: 98.2267% ( 7) 00:07:50.765 19257.502 - 19358.326: 98.3011% ( 6) 00:07:50.765 19358.326 - 19459.151: 98.3755% ( 6) 
00:07:50.765 19459.151 - 19559.975: 98.4127% ( 3) 00:07:50.765 33877.071 - 34078.720: 98.4251% ( 1) 00:07:50.765 34078.720 - 34280.369: 98.4995% ( 6) 00:07:50.765 34280.369 - 34482.018: 98.5863% ( 7) 00:07:50.765 34482.018 - 34683.668: 98.6855% ( 8) 00:07:50.765 34683.668 - 34885.317: 98.7723% ( 7) 00:07:50.765 34885.317 - 35086.966: 98.8591% ( 7) 00:07:50.765 35086.966 - 35288.615: 98.9459% ( 7) 00:07:50.765 35288.615 - 35490.265: 99.0451% ( 8) 00:07:50.765 35490.265 - 35691.914: 99.1319% ( 7) 00:07:50.765 35691.914 - 35893.563: 99.2063% ( 6) 00:07:50.765 43757.883 - 43959.532: 99.2188% ( 1) 00:07:50.765 43959.532 - 44161.182: 99.3180% ( 8) 00:07:50.765 44161.182 - 44362.831: 99.4048% ( 7) 00:07:50.765 44362.831 - 44564.480: 99.4916% ( 7) 00:07:50.765 44564.480 - 44766.129: 99.5660% ( 6) 00:07:50.765 44766.129 - 44967.778: 99.6528% ( 7) 00:07:50.765 44967.778 - 45169.428: 99.7396% ( 7) 00:07:50.765 45169.428 - 45371.077: 99.7892% ( 4) 00:07:50.765 45371.077 - 45572.726: 99.8760% ( 7) 00:07:50.765 45572.726 - 45774.375: 99.9628% ( 7) 00:07:50.765 45774.375 - 45976.025: 100.0000% ( 3) 00:07:50.765 00:07:50.765 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:50.765 ============================================================================== 00:07:50.765 Range in us Cumulative IO count 00:07:50.765 11141.120 - 11191.532: 0.0124% ( 1) 00:07:50.765 11191.532 - 11241.945: 0.0372% ( 2) 00:07:50.765 11241.945 - 11292.357: 0.0868% ( 4) 00:07:50.765 11292.357 - 11342.769: 0.1612% ( 6) 00:07:50.765 11342.769 - 11393.182: 0.2108% ( 4) 00:07:50.765 11393.182 - 11443.594: 0.2356% ( 2) 00:07:50.765 11443.594 - 11494.006: 0.2728% ( 3) 00:07:50.765 11494.006 - 11544.418: 0.2852% ( 1) 00:07:50.765 11544.418 - 11594.831: 0.3224% ( 3) 00:07:50.765 11594.831 - 11645.243: 0.3596% ( 3) 00:07:50.765 11645.243 - 11695.655: 0.3844% ( 2) 00:07:50.765 11695.655 - 11746.068: 0.4092% ( 2) 00:07:50.765 11746.068 - 11796.480: 0.4464% ( 3) 00:07:50.765 11796.480 - 11846.892: 0.4712% ( 2) 00:07:50.765 11846.892 - 11897.305: 0.4960% ( 2) 00:07:50.765 11897.305 - 11947.717: 0.5208% ( 2) 00:07:50.765 11947.717 - 11998.129: 0.5580% ( 3) 00:07:50.765 11998.129 - 12048.542: 0.5952% ( 3) 00:07:50.766 12048.542 - 12098.954: 0.6200% ( 2) 00:07:50.766 12098.954 - 12149.366: 0.6572% ( 3) 00:07:50.766 12149.366 - 12199.778: 0.6944% ( 3) 00:07:50.766 12199.778 - 12250.191: 0.7068% ( 1) 00:07:50.766 12250.191 - 12300.603: 0.7440% ( 3) 00:07:50.766 12300.603 - 12351.015: 0.7688% ( 2) 00:07:50.766 12351.015 - 12401.428: 0.7937% ( 2) 00:07:50.766 12552.665 - 12603.077: 0.8433% ( 4) 00:07:50.766 12603.077 - 12653.489: 0.8929% ( 4) 00:07:50.766 12653.489 - 12703.902: 0.9921% ( 8) 00:07:50.766 12703.902 - 12754.314: 1.1037% ( 9) 00:07:50.766 12754.314 - 12804.726: 1.2649% ( 13) 00:07:50.766 12804.726 - 12855.138: 1.5129% ( 20) 00:07:50.766 12855.138 - 12905.551: 1.7485% ( 19) 00:07:50.766 12905.551 - 13006.375: 2.3313% ( 47) 00:07:50.766 13006.375 - 13107.200: 3.1870% ( 69) 00:07:50.766 13107.200 - 13208.025: 4.2659% ( 87) 00:07:50.766 13208.025 - 13308.849: 5.2455% ( 79) 00:07:50.766 13308.849 - 13409.674: 6.5228% ( 103) 00:07:50.766 13409.674 - 13510.498: 8.1845% ( 134) 00:07:50.766 13510.498 - 13611.323: 9.7842% ( 129) 00:07:50.766 13611.323 - 13712.148: 11.2475% ( 118) 00:07:50.766 13712.148 - 13812.972: 12.8596% ( 130) 00:07:50.766 13812.972 - 13913.797: 14.5337% ( 135) 00:07:50.766 13913.797 - 14014.622: 16.0838% ( 125) 00:07:50.766 14014.622 - 14115.446: 17.6215% ( 124) 00:07:50.766 14115.446 - 14216.271: 19.5064% 
( 152) 00:07:50.766 14216.271 - 14317.095: 21.7262% ( 179) 00:07:50.766 14317.095 - 14417.920: 23.9831% ( 182) 00:07:50.766 14417.920 - 14518.745: 26.0913% ( 170) 00:07:50.766 14518.745 - 14619.569: 28.4102% ( 187) 00:07:50.766 14619.569 - 14720.394: 30.9028% ( 201) 00:07:50.766 14720.394 - 14821.218: 33.3333% ( 196) 00:07:50.766 14821.218 - 14922.043: 35.9995% ( 215) 00:07:50.766 14922.043 - 15022.868: 38.9757% ( 240) 00:07:50.766 15022.868 - 15123.692: 42.2619% ( 265) 00:07:50.766 15123.692 - 15224.517: 45.3001% ( 245) 00:07:50.766 15224.517 - 15325.342: 48.0779% ( 224) 00:07:50.766 15325.342 - 15426.166: 50.9053% ( 228) 00:07:50.766 15426.166 - 15526.991: 53.3110% ( 194) 00:07:50.766 15526.991 - 15627.815: 55.9028% ( 209) 00:07:50.766 15627.815 - 15728.640: 58.4449% ( 205) 00:07:50.766 15728.640 - 15829.465: 61.2599% ( 227) 00:07:50.766 15829.465 - 15930.289: 63.7277% ( 199) 00:07:50.766 15930.289 - 16031.114: 66.0714% ( 189) 00:07:50.766 16031.114 - 16131.938: 68.4524% ( 192) 00:07:50.766 16131.938 - 16232.763: 70.5977% ( 173) 00:07:50.766 16232.763 - 16333.588: 72.6811% ( 168) 00:07:50.766 16333.588 - 16434.412: 74.5784% ( 153) 00:07:50.766 16434.412 - 16535.237: 76.2649% ( 136) 00:07:50.766 16535.237 - 16636.062: 78.0382% ( 143) 00:07:50.766 16636.062 - 16736.886: 79.5759% ( 124) 00:07:50.766 16736.886 - 16837.711: 80.9648% ( 112) 00:07:50.766 16837.711 - 16938.535: 82.2669% ( 105) 00:07:50.766 16938.535 - 17039.360: 83.6682% ( 113) 00:07:50.766 17039.360 - 17140.185: 84.9950% ( 107) 00:07:50.766 17140.185 - 17241.009: 86.3095% ( 106) 00:07:50.766 17241.009 - 17341.834: 87.3636% ( 85) 00:07:50.766 17341.834 - 17442.658: 88.3557% ( 80) 00:07:50.766 17442.658 - 17543.483: 89.4345% ( 87) 00:07:50.766 17543.483 - 17644.308: 90.3026% ( 70) 00:07:50.766 17644.308 - 17745.132: 91.0590% ( 61) 00:07:50.766 17745.132 - 17845.957: 91.8899% ( 67) 00:07:50.766 17845.957 - 17946.782: 92.7331% ( 68) 00:07:50.766 17946.782 - 18047.606: 93.4152% ( 55) 00:07:50.766 18047.606 - 18148.431: 94.0724% ( 53) 00:07:50.766 18148.431 - 18249.255: 94.5809% ( 41) 00:07:50.766 18249.255 - 18350.080: 95.0893% ( 41) 00:07:50.766 18350.080 - 18450.905: 95.4737% ( 31) 00:07:50.766 18450.905 - 18551.729: 95.9077% ( 35) 00:07:50.766 18551.729 - 18652.554: 96.4038% ( 40) 00:07:50.766 18652.554 - 18753.378: 96.8874% ( 39) 00:07:50.766 18753.378 - 18854.203: 97.1974% ( 25) 00:07:50.766 18854.203 - 18955.028: 97.4702% ( 22) 00:07:50.766 18955.028 - 19055.852: 97.7555% ( 23) 00:07:50.766 19055.852 - 19156.677: 97.9787% ( 18) 00:07:50.766 19156.677 - 19257.502: 98.2143% ( 19) 00:07:50.766 19257.502 - 19358.326: 98.3507% ( 11) 00:07:50.766 19358.326 - 19459.151: 98.4127% ( 5) 00:07:50.766 32667.175 - 32868.825: 98.4871% ( 6) 00:07:50.766 32868.825 - 33070.474: 98.5863% ( 8) 00:07:50.766 33070.474 - 33272.123: 98.6731% ( 7) 00:07:50.766 33272.123 - 33473.772: 98.7723% ( 8) 00:07:50.766 33473.772 - 33675.422: 98.8591% ( 7) 00:07:50.766 33675.422 - 33877.071: 98.9459% ( 7) 00:07:50.766 33877.071 - 34078.720: 99.0327% ( 7) 00:07:50.766 34078.720 - 34280.369: 99.1195% ( 7) 00:07:50.766 34280.369 - 34482.018: 99.2063% ( 7) 00:07:50.766 42144.689 - 42346.338: 99.2188% ( 1) 00:07:50.766 42346.338 - 42547.988: 99.3056% ( 7) 00:07:50.766 42547.988 - 42749.637: 99.3924% ( 7) 00:07:50.766 42749.637 - 42951.286: 99.4544% ( 5) 00:07:50.766 42951.286 - 43152.935: 99.5536% ( 8) 00:07:50.766 43152.935 - 43354.585: 99.6280% ( 6) 00:07:50.766 43354.585 - 43556.234: 99.7148% ( 7) 00:07:50.766 43556.234 - 43757.883: 99.8016% ( 7) 00:07:50.766 
43757.883 - 43959.532: 99.8884% ( 7) 00:07:50.766 43959.532 - 44161.182: 99.9752% ( 7) 00:07:50.766 44161.182 - 44362.831: 100.0000% ( 2) 00:07:50.766 00:07:50.766 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:50.766 ============================================================================== 00:07:50.766 Range in us Cumulative IO count 00:07:50.766 10586.585 - 10636.997: 0.0123% ( 1) 00:07:50.766 10636.997 - 10687.409: 0.0369% ( 2) 00:07:50.766 10687.409 - 10737.822: 0.0615% ( 2) 00:07:50.766 10737.822 - 10788.234: 0.1107% ( 4) 00:07:50.766 10788.234 - 10838.646: 0.1353% ( 2) 00:07:50.766 10838.646 - 10889.058: 0.1599% ( 2) 00:07:50.766 10889.058 - 10939.471: 0.1969% ( 3) 00:07:50.766 10939.471 - 10989.883: 0.2338% ( 3) 00:07:50.766 10989.883 - 11040.295: 0.2707% ( 3) 00:07:50.766 11040.295 - 11090.708: 0.2953% ( 2) 00:07:50.766 11090.708 - 11141.120: 0.3199% ( 2) 00:07:50.766 11141.120 - 11191.532: 0.3445% ( 2) 00:07:50.766 11191.532 - 11241.945: 0.3937% ( 4) 00:07:50.766 11241.945 - 11292.357: 0.4183% ( 2) 00:07:50.766 11292.357 - 11342.769: 0.4552% ( 3) 00:07:50.766 11342.769 - 11393.182: 0.4798% ( 2) 00:07:50.766 11393.182 - 11443.594: 0.5167% ( 3) 00:07:50.766 11443.594 - 11494.006: 0.5413% ( 2) 00:07:50.766 11494.006 - 11544.418: 0.5782% ( 3) 00:07:50.766 11544.418 - 11594.831: 0.6152% ( 3) 00:07:50.766 11594.831 - 11645.243: 0.6398% ( 2) 00:07:50.766 11645.243 - 11695.655: 0.6644% ( 2) 00:07:50.766 11695.655 - 11746.068: 0.7013% ( 3) 00:07:50.766 11746.068 - 11796.480: 0.7259% ( 2) 00:07:50.766 11796.480 - 11846.892: 0.7505% ( 2) 00:07:50.766 11846.892 - 11897.305: 0.7751% ( 2) 00:07:50.766 11897.305 - 11947.717: 0.7874% ( 1) 00:07:50.766 12401.428 - 12451.840: 0.8120% ( 2) 00:07:50.766 12451.840 - 12502.252: 0.8366% ( 2) 00:07:50.766 12502.252 - 12552.665: 0.8858% ( 4) 00:07:50.766 12552.665 - 12603.077: 0.8981% ( 1) 00:07:50.766 12603.077 - 12653.489: 0.9227% ( 2) 00:07:50.766 12653.489 - 12703.902: 0.9843% ( 5) 00:07:50.766 12703.902 - 12754.314: 1.1196% ( 11) 00:07:50.766 12754.314 - 12804.726: 1.2303% ( 9) 00:07:50.766 12804.726 - 12855.138: 1.4149% ( 15) 00:07:50.766 12855.138 - 12905.551: 1.6240% ( 17) 00:07:50.766 12905.551 - 13006.375: 2.0792% ( 37) 00:07:50.766 13006.375 - 13107.200: 2.8051% ( 59) 00:07:50.766 13107.200 - 13208.025: 3.9124% ( 90) 00:07:50.766 13208.025 - 13308.849: 5.4011% ( 121) 00:07:50.767 13308.849 - 13409.674: 6.8898% ( 121) 00:07:50.767 13409.674 - 13510.498: 8.4277% ( 125) 00:07:50.767 13510.498 - 13611.323: 9.8794% ( 118) 00:07:50.767 13611.323 - 13712.148: 11.1836% ( 106) 00:07:50.767 13712.148 - 13812.972: 12.8322% ( 134) 00:07:50.767 13812.972 - 13913.797: 14.3947% ( 127) 00:07:50.767 13913.797 - 14014.622: 15.9695% ( 128) 00:07:50.767 14014.622 - 14115.446: 17.6427% ( 136) 00:07:50.767 14115.446 - 14216.271: 19.3159% ( 136) 00:07:50.767 14216.271 - 14317.095: 21.3952% ( 169) 00:07:50.767 14317.095 - 14417.920: 23.4621% ( 168) 00:07:50.767 14417.920 - 14518.745: 25.6029% ( 174) 00:07:50.767 14518.745 - 14619.569: 28.1742% ( 209) 00:07:50.767 14619.569 - 14720.394: 30.4503% ( 185) 00:07:50.767 14720.394 - 14821.218: 33.1693% ( 221) 00:07:50.767 14821.218 - 14922.043: 35.5438% ( 193) 00:07:50.767 14922.043 - 15022.868: 37.9921% ( 199) 00:07:50.767 15022.868 - 15123.692: 40.7234% ( 222) 00:07:50.767 15123.692 - 15224.517: 43.5901% ( 233) 00:07:50.767 15224.517 - 15325.342: 46.4444% ( 232) 00:07:50.767 15325.342 - 15426.166: 49.2003% ( 224) 00:07:50.767 15426.166 - 15526.991: 51.6240% ( 197) 00:07:50.767 15526.991 - 
15627.815: 54.6137% ( 243) 00:07:50.767 15627.815 - 15728.640: 57.5787% ( 241) 00:07:50.767 15728.640 - 15829.465: 60.7160% ( 255) 00:07:50.767 15829.465 - 15930.289: 63.5950% ( 234) 00:07:50.767 15930.289 - 16031.114: 66.3140% ( 221) 00:07:50.767 16031.114 - 16131.938: 68.7992% ( 202) 00:07:50.767 16131.938 - 16232.763: 71.1737% ( 193) 00:07:50.767 16232.763 - 16333.588: 73.2160% ( 166) 00:07:50.767 16333.588 - 16434.412: 75.4552% ( 182) 00:07:50.767 16434.412 - 16535.237: 77.3991% ( 158) 00:07:50.767 16535.237 - 16636.062: 78.9493% ( 126) 00:07:50.767 16636.062 - 16736.886: 80.2411% ( 105) 00:07:50.767 16736.886 - 16837.711: 81.6437% ( 114) 00:07:50.767 16837.711 - 16938.535: 82.8740% ( 100) 00:07:50.767 16938.535 - 17039.360: 84.0551% ( 96) 00:07:50.767 17039.360 - 17140.185: 85.0394% ( 80) 00:07:50.767 17140.185 - 17241.009: 86.1836% ( 93) 00:07:50.767 17241.009 - 17341.834: 87.3278% ( 93) 00:07:50.767 17341.834 - 17442.658: 88.4350% ( 90) 00:07:50.767 17442.658 - 17543.483: 89.5915% ( 94) 00:07:50.767 17543.483 - 17644.308: 90.7603% ( 95) 00:07:50.767 17644.308 - 17745.132: 91.8922% ( 92) 00:07:50.767 17745.132 - 17845.957: 92.7534% ( 70) 00:07:50.767 17845.957 - 17946.782: 93.3563% ( 49) 00:07:50.767 17946.782 - 18047.606: 93.9715% ( 50) 00:07:50.767 18047.606 - 18148.431: 94.6727% ( 57) 00:07:50.767 18148.431 - 18249.255: 95.2018% ( 43) 00:07:50.767 18249.255 - 18350.080: 95.5955% ( 32) 00:07:50.767 18350.080 - 18450.905: 95.9769% ( 31) 00:07:50.767 18450.905 - 18551.729: 96.3460% ( 30) 00:07:50.767 18551.729 - 18652.554: 96.7151% ( 30) 00:07:50.767 18652.554 - 18753.378: 97.0595% ( 28) 00:07:50.767 18753.378 - 18854.203: 97.2933% ( 19) 00:07:50.767 18854.203 - 18955.028: 97.5025% ( 17) 00:07:50.767 18955.028 - 19055.852: 97.7239% ( 18) 00:07:50.767 19055.852 - 19156.677: 97.9208% ( 16) 00:07:50.767 19156.677 - 19257.502: 98.0684% ( 12) 00:07:50.767 19257.502 - 19358.326: 98.1914% ( 10) 00:07:50.767 19358.326 - 19459.151: 98.2653% ( 6) 00:07:50.767 19459.151 - 19559.975: 98.3391% ( 6) 00:07:50.767 19559.975 - 19660.800: 98.4006% ( 5) 00:07:50.767 19660.800 - 19761.625: 98.4252% ( 2) 00:07:50.767 23189.662 - 23290.486: 98.4621% ( 3) 00:07:50.767 23290.486 - 23391.311: 98.5113% ( 4) 00:07:50.767 23391.311 - 23492.135: 98.5605% ( 4) 00:07:50.767 23492.135 - 23592.960: 98.5974% ( 3) 00:07:50.767 23592.960 - 23693.785: 98.6467% ( 4) 00:07:50.767 23693.785 - 23794.609: 98.6959% ( 4) 00:07:50.767 23794.609 - 23895.434: 98.7451% ( 4) 00:07:50.767 23895.434 - 23996.258: 98.7820% ( 3) 00:07:50.767 23996.258 - 24097.083: 98.8312% ( 4) 00:07:50.767 24097.083 - 24197.908: 98.8681% ( 3) 00:07:50.767 24197.908 - 24298.732: 98.9173% ( 4) 00:07:50.767 24298.732 - 24399.557: 98.9665% ( 4) 00:07:50.767 24399.557 - 24500.382: 99.0157% ( 4) 00:07:50.767 24500.382 - 24601.206: 99.0527% ( 3) 00:07:50.767 24601.206 - 24702.031: 99.1019% ( 4) 00:07:50.767 24702.031 - 24802.855: 99.1511% ( 4) 00:07:50.767 24802.855 - 24903.680: 99.1880% ( 3) 00:07:50.767 24903.680 - 25004.505: 99.2126% ( 2) 00:07:50.767 32465.526 - 32667.175: 99.2495% ( 3) 00:07:50.767 32667.175 - 32868.825: 99.3479% ( 8) 00:07:50.767 32868.825 - 33070.474: 99.4341% ( 7) 00:07:50.767 33070.474 - 33272.123: 99.5079% ( 6) 00:07:50.767 34078.720 - 34280.369: 99.5202% ( 1) 00:07:50.767 34280.369 - 34482.018: 99.6063% ( 7) 00:07:50.767 34482.018 - 34683.668: 99.6924% ( 7) 00:07:50.767 34683.668 - 34885.317: 99.7908% ( 8) 00:07:50.767 34885.317 - 35086.966: 99.8770% ( 7) 00:07:50.767 35086.966 - 35288.615: 99.9754% ( 8) 00:07:50.767 35288.615 - 
35490.265: 100.0000% ( 2) 00:07:50.767 00:07:50.767 16:54:24 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:07:51.706 Initializing NVMe Controllers 00:07:51.706 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:51.706 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:51.706 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:51.706 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:51.706 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:51.706 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:51.706 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:51.706 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:51.706 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:51.706 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:51.706 Initialization complete. Launching workers. 00:07:51.706 ======================================================== 00:07:51.706 Latency(us) 00:07:51.706 Device Information : IOPS MiB/s Average min max 00:07:51.706 PCIE (0000:00:13.0) NSID 1 from core 0: 8683.87 101.76 14763.45 9797.83 42540.02 00:07:51.706 PCIE (0000:00:10.0) NSID 1 from core 0: 8683.87 101.76 14735.88 10202.17 41589.11 00:07:51.706 PCIE (0000:00:11.0) NSID 1 from core 0: 8683.87 101.76 14706.91 10017.67 39661.75 00:07:51.707 PCIE (0000:00:12.0) NSID 1 from core 0: 8683.87 101.76 14680.29 9704.61 39199.54 00:07:51.707 PCIE (0000:00:12.0) NSID 2 from core 0: 8683.87 101.76 14653.81 9787.01 37993.10 00:07:51.707 PCIE (0000:00:12.0) NSID 3 from core 0: 8747.72 102.51 14520.13 9622.08 28487.79 00:07:51.707 ======================================================== 00:07:51.707 Total : 52167.08 611.33 14676.55 9622.08 42540.02 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10384.935us 00:07:51.707 10.00000% : 11695.655us 00:07:51.707 25.00000% : 13006.375us 00:07:51.707 50.00000% : 14619.569us 00:07:51.707 75.00000% : 15829.465us 00:07:51.707 90.00000% : 16736.886us 00:07:51.707 95.00000% : 18350.080us 00:07:51.707 98.00000% : 20870.695us 00:07:51.707 99.00000% : 33473.772us 00:07:51.707 99.50000% : 40531.495us 00:07:51.707 99.90000% : 42346.338us 00:07:51.707 99.99000% : 42547.988us 00:07:51.707 99.99900% : 42547.988us 00:07:51.707 99.99990% : 42547.988us 00:07:51.707 99.99999% : 42547.988us 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10586.585us 00:07:51.707 10.00000% : 11645.243us 00:07:51.707 25.00000% : 13006.375us 00:07:51.707 50.00000% : 14619.569us 00:07:51.707 75.00000% : 15829.465us 00:07:51.707 90.00000% : 17039.360us 00:07:51.707 95.00000% : 18047.606us 00:07:51.707 98.00000% : 20467.397us 00:07:51.707 99.00000% : 31053.982us 00:07:51.707 99.50000% : 40329.846us 00:07:51.707 99.90000% : 41539.742us 00:07:51.707 99.99000% : 41741.391us 00:07:51.707 99.99900% : 41741.391us 00:07:51.707 99.99990% : 41741.391us 00:07:51.707 99.99999% : 41741.391us 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10788.234us 00:07:51.707 10.00000% : 11746.068us 
00:07:51.707 25.00000% : 12804.726us 00:07:51.707 50.00000% : 14720.394us 00:07:51.707 75.00000% : 15829.465us 00:07:51.707 90.00000% : 16938.535us 00:07:51.707 95.00000% : 18249.255us 00:07:51.707 98.00000% : 20568.222us 00:07:51.707 99.00000% : 29440.788us 00:07:51.707 99.50000% : 38515.003us 00:07:51.707 99.90000% : 39523.249us 00:07:51.707 99.99000% : 39724.898us 00:07:51.707 99.99900% : 39724.898us 00:07:51.707 99.99990% : 39724.898us 00:07:51.707 99.99999% : 39724.898us 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10384.935us 00:07:51.707 10.00000% : 11746.068us 00:07:51.707 25.00000% : 12754.314us 00:07:51.707 50.00000% : 14720.394us 00:07:51.707 75.00000% : 15829.465us 00:07:51.707 90.00000% : 16938.535us 00:07:51.707 95.00000% : 18148.431us 00:07:51.707 98.00000% : 20467.397us 00:07:51.707 99.00000% : 28835.840us 00:07:51.707 99.50000% : 38111.705us 00:07:51.707 99.90000% : 39119.951us 00:07:51.707 99.99000% : 39321.600us 00:07:51.707 99.99900% : 39321.600us 00:07:51.707 99.99990% : 39321.600us 00:07:51.707 99.99999% : 39321.600us 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10233.698us 00:07:51.707 10.00000% : 11746.068us 00:07:51.707 25.00000% : 12804.726us 00:07:51.707 50.00000% : 14619.569us 00:07:51.707 75.00000% : 15930.289us 00:07:51.707 90.00000% : 16736.886us 00:07:51.707 95.00000% : 18450.905us 00:07:51.707 98.00000% : 20366.572us 00:07:51.707 99.00000% : 27827.594us 00:07:51.707 99.50000% : 36901.809us 00:07:51.707 99.90000% : 37910.055us 00:07:51.707 99.99000% : 38111.705us 00:07:51.707 99.99900% : 38111.705us 00:07:51.707 99.99990% : 38111.705us 00:07:51.707 99.99999% : 38111.705us 00:07:51.707 00:07:51.707 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:51.707 ================================================================================= 00:07:51.707 1.00000% : 10284.111us 00:07:51.707 10.00000% : 11645.243us 00:07:51.707 25.00000% : 12804.726us 00:07:51.707 50.00000% : 14720.394us 00:07:51.707 75.00000% : 15829.465us 00:07:51.707 90.00000% : 16736.886us 00:07:51.707 95.00000% : 18148.431us 00:07:51.707 98.00000% : 20164.923us 00:07:51.707 99.00000% : 20769.871us 00:07:51.707 99.50000% : 27424.295us 00:07:51.707 99.90000% : 28432.542us 00:07:51.707 99.99000% : 28634.191us 00:07:51.707 99.99900% : 28634.191us 00:07:51.707 99.99990% : 28634.191us 00:07:51.707 99.99999% : 28634.191us 00:07:51.707 00:07:51.707 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:51.707 ============================================================================== 00:07:51.707 Range in us Cumulative IO count 00:07:51.707 9779.988 - 9830.400: 0.0230% ( 2) 00:07:51.707 9830.400 - 9880.812: 0.0689% ( 4) 00:07:51.707 9880.812 - 9931.225: 0.1264% ( 5) 00:07:51.707 9931.225 - 9981.637: 0.1723% ( 4) 00:07:51.707 9981.637 - 10032.049: 0.2183% ( 4) 00:07:51.707 10032.049 - 10082.462: 0.2528% ( 3) 00:07:51.707 10082.462 - 10132.874: 0.3217% ( 6) 00:07:51.707 10132.874 - 10183.286: 0.4251% ( 9) 00:07:51.707 10183.286 - 10233.698: 0.6434% ( 19) 00:07:51.707 10233.698 - 10284.111: 0.7583% ( 10) 00:07:51.707 10284.111 - 10334.523: 0.8847% ( 11) 00:07:51.707 10334.523 - 10384.935: 1.0685% ( 16) 00:07:51.707 10384.935 - 10435.348: 1.2063% ( 
12) 00:07:51.707 10435.348 - 10485.760: 1.2983% ( 8) 00:07:51.707 10485.760 - 10536.172: 1.3557% ( 5) 00:07:51.707 10536.172 - 10586.585: 1.4017% ( 4) 00:07:51.707 10586.585 - 10636.997: 1.4591% ( 5) 00:07:51.707 10636.997 - 10687.409: 1.5165% ( 5) 00:07:51.707 10687.409 - 10737.822: 1.5855% ( 6) 00:07:51.707 10737.822 - 10788.234: 1.6774% ( 8) 00:07:51.707 10788.234 - 10838.646: 1.8038% ( 11) 00:07:51.707 10838.646 - 10889.058: 2.1714% ( 32) 00:07:51.707 10889.058 - 10939.471: 2.4012% ( 20) 00:07:51.707 10939.471 - 10989.883: 2.6425% ( 21) 00:07:51.707 10989.883 - 11040.295: 2.8148% ( 15) 00:07:51.707 11040.295 - 11090.708: 3.0216% ( 18) 00:07:51.707 11090.708 - 11141.120: 3.2973% ( 24) 00:07:51.707 11141.120 - 11191.532: 3.6420% ( 30) 00:07:51.707 11191.532 - 11241.945: 4.0786% ( 38) 00:07:51.707 11241.945 - 11292.357: 4.6645% ( 51) 00:07:51.707 11292.357 - 11342.769: 5.2160% ( 48) 00:07:51.707 11342.769 - 11393.182: 5.7790% ( 49) 00:07:51.707 11393.182 - 11443.594: 6.3304% ( 48) 00:07:51.707 11443.594 - 11494.006: 6.9393% ( 53) 00:07:51.707 11494.006 - 11544.418: 7.5138% ( 50) 00:07:51.707 11544.418 - 11594.831: 8.2606% ( 65) 00:07:51.707 11594.831 - 11645.243: 9.1108% ( 74) 00:07:51.707 11645.243 - 11695.655: 10.0414% ( 81) 00:07:51.707 11695.655 - 11746.068: 10.7077% ( 58) 00:07:51.707 11746.068 - 11796.480: 11.3281% ( 54) 00:07:51.707 11796.480 - 11846.892: 12.0749% ( 65) 00:07:51.707 11846.892 - 11897.305: 12.6953% ( 54) 00:07:51.707 11897.305 - 11947.717: 13.3157% ( 54) 00:07:51.707 11947.717 - 11998.129: 14.0510% ( 64) 00:07:51.707 11998.129 - 12048.542: 14.7174% ( 58) 00:07:51.707 12048.542 - 12098.954: 15.1654% ( 39) 00:07:51.707 12098.954 - 12149.366: 15.7514% ( 51) 00:07:51.707 12149.366 - 12199.778: 16.1535% ( 35) 00:07:51.707 12199.778 - 12250.191: 16.4752% ( 28) 00:07:51.707 12250.191 - 12300.603: 16.8773% ( 35) 00:07:51.707 12300.603 - 12351.015: 17.2220% ( 30) 00:07:51.707 12351.015 - 12401.428: 17.8998% ( 59) 00:07:51.707 12401.428 - 12451.840: 18.5317% ( 55) 00:07:51.707 12451.840 - 12502.252: 19.0832% ( 48) 00:07:51.707 12502.252 - 12552.665: 19.6232% ( 47) 00:07:51.707 12552.665 - 12603.077: 20.0942% ( 41) 00:07:51.707 12603.077 - 12653.489: 20.6687% ( 50) 00:07:51.707 12653.489 - 12703.902: 21.2201% ( 48) 00:07:51.707 12703.902 - 12754.314: 21.7601% ( 47) 00:07:51.707 12754.314 - 12804.726: 22.6103% ( 74) 00:07:51.707 12804.726 - 12855.138: 23.1273% ( 45) 00:07:51.707 12855.138 - 12905.551: 23.7017% ( 50) 00:07:51.707 12905.551 - 13006.375: 25.1953% ( 130) 00:07:51.707 13006.375 - 13107.200: 26.9761% ( 155) 00:07:51.707 13107.200 - 13208.025: 28.8258% ( 161) 00:07:51.707 13208.025 - 13308.849: 30.5147% ( 147) 00:07:51.707 13308.849 - 13409.674: 31.8704% ( 118) 00:07:51.707 13409.674 - 13510.498: 33.3065% ( 125) 00:07:51.707 13510.498 - 13611.323: 34.4554% ( 100) 00:07:51.707 13611.323 - 13712.148: 35.3860% ( 81) 00:07:51.707 13712.148 - 13812.972: 36.3971% ( 88) 00:07:51.707 13812.972 - 13913.797: 37.4540% ( 92) 00:07:51.707 13913.797 - 14014.622: 38.3502% ( 78) 00:07:51.707 14014.622 - 14115.446: 39.7403% ( 121) 00:07:51.707 14115.446 - 14216.271: 41.5097% ( 154) 00:07:51.707 14216.271 - 14317.095: 43.3824% ( 163) 00:07:51.707 14317.095 - 14417.920: 45.6572% ( 198) 00:07:51.707 14417.920 - 14518.745: 48.1273% ( 215) 00:07:51.707 14518.745 - 14619.569: 50.0000% ( 163) 00:07:51.707 14619.569 - 14720.394: 52.5046% ( 218) 00:07:51.707 14720.394 - 14821.218: 54.3773% ( 163) 00:07:51.707 14821.218 - 14922.043: 56.2845% ( 166) 00:07:51.707 14922.043 - 15022.868: 58.0767% ( 
156) 00:07:51.707 15022.868 - 15123.692: 59.8575% ( 155) 00:07:51.707 15123.692 - 15224.517: 61.8451% ( 173) 00:07:51.707 15224.517 - 15325.342: 64.4646% ( 228) 00:07:51.707 15325.342 - 15426.166: 66.9692% ( 218) 00:07:51.707 15426.166 - 15526.991: 69.3244% ( 205) 00:07:51.707 15526.991 - 15627.815: 71.4959% ( 189) 00:07:51.708 15627.815 - 15728.640: 73.7707% ( 198) 00:07:51.708 15728.640 - 15829.465: 76.0110% ( 195) 00:07:51.708 15829.465 - 15930.289: 77.8148% ( 157) 00:07:51.708 15930.289 - 16031.114: 79.9173% ( 183) 00:07:51.708 16031.114 - 16131.938: 81.9049% ( 173) 00:07:51.708 16131.938 - 16232.763: 83.7431% ( 160) 00:07:51.708 16232.763 - 16333.588: 85.5469% ( 157) 00:07:51.708 16333.588 - 16434.412: 86.8566% ( 114) 00:07:51.708 16434.412 - 16535.237: 88.2238% ( 119) 00:07:51.708 16535.237 - 16636.062: 89.8438% ( 141) 00:07:51.708 16636.062 - 16736.886: 90.9467% ( 96) 00:07:51.708 16736.886 - 16837.711: 91.5211% ( 50) 00:07:51.708 16837.711 - 16938.535: 91.9462% ( 37) 00:07:51.708 16938.535 - 17039.360: 92.3024% ( 31) 00:07:51.708 17039.360 - 17140.185: 92.5781% ( 24) 00:07:51.708 17140.185 - 17241.009: 92.9228% ( 30) 00:07:51.708 17241.009 - 17341.834: 93.0722% ( 13) 00:07:51.708 17341.834 - 17442.658: 93.1870% ( 10) 00:07:51.708 17442.658 - 17543.483: 93.3019% ( 10) 00:07:51.708 17543.483 - 17644.308: 93.4743% ( 15) 00:07:51.708 17644.308 - 17745.132: 93.7385% ( 23) 00:07:51.708 17745.132 - 17845.957: 94.0487% ( 27) 00:07:51.708 17845.957 - 17946.782: 94.1751% ( 11) 00:07:51.708 17946.782 - 18047.606: 94.2785% ( 9) 00:07:51.708 18047.606 - 18148.431: 94.4393% ( 14) 00:07:51.708 18148.431 - 18249.255: 94.6576% ( 19) 00:07:51.708 18249.255 - 18350.080: 95.1287% ( 41) 00:07:51.708 18350.080 - 18450.905: 95.7950% ( 58) 00:07:51.708 18450.905 - 18551.729: 96.1857% ( 34) 00:07:51.708 18551.729 - 18652.554: 96.4499% ( 23) 00:07:51.708 18652.554 - 18753.378: 96.5878% ( 12) 00:07:51.708 18753.378 - 18854.203: 96.6682% ( 7) 00:07:51.708 18854.203 - 18955.028: 96.7716% ( 9) 00:07:51.708 18955.028 - 19055.852: 96.8635% ( 8) 00:07:51.708 19055.852 - 19156.677: 96.9095% ( 4) 00:07:51.708 19156.677 - 19257.502: 96.9554% ( 4) 00:07:51.708 19257.502 - 19358.326: 97.0129% ( 5) 00:07:51.708 19358.326 - 19459.151: 97.0588% ( 4) 00:07:51.708 19963.274 - 20064.098: 97.0933% ( 3) 00:07:51.708 20064.098 - 20164.923: 97.1622% ( 6) 00:07:51.708 20164.923 - 20265.748: 97.3001% ( 12) 00:07:51.708 20265.748 - 20366.572: 97.5069% ( 18) 00:07:51.708 20366.572 - 20467.397: 97.6677% ( 14) 00:07:51.708 20467.397 - 20568.222: 97.8286% ( 14) 00:07:51.708 20568.222 - 20669.046: 97.9205% ( 8) 00:07:51.708 20669.046 - 20769.871: 97.9894% ( 6) 00:07:51.708 20769.871 - 20870.695: 98.0584% ( 6) 00:07:51.708 20870.695 - 20971.520: 98.1273% ( 6) 00:07:51.708 20971.520 - 21072.345: 98.1962% ( 6) 00:07:51.708 21072.345 - 21173.169: 98.2192% ( 2) 00:07:51.708 21173.169 - 21273.994: 98.3111% ( 8) 00:07:51.708 21273.994 - 21374.818: 98.4375% ( 11) 00:07:51.708 21374.818 - 21475.643: 98.5064% ( 6) 00:07:51.708 21475.643 - 21576.468: 98.5294% ( 2) 00:07:51.708 32062.228 - 32263.877: 98.5869% ( 5) 00:07:51.708 32263.877 - 32465.526: 98.7132% ( 11) 00:07:51.708 32465.526 - 32667.175: 98.7822% ( 6) 00:07:51.708 32667.175 - 32868.825: 98.8396% ( 5) 00:07:51.708 32868.825 - 33070.474: 98.8971% ( 5) 00:07:51.708 33070.474 - 33272.123: 98.9660% ( 6) 00:07:51.708 33272.123 - 33473.772: 99.0464% ( 7) 00:07:51.708 33473.772 - 33675.422: 99.1268% ( 7) 00:07:51.708 33675.422 - 33877.071: 99.2073% ( 7) 00:07:51.708 33877.071 - 34078.720: 
99.2647% ( 5) 00:07:51.708 39926.548 - 40128.197: 99.3107% ( 4) 00:07:51.708 40128.197 - 40329.846: 99.4026% ( 8) 00:07:51.708 40329.846 - 40531.495: 99.5175% ( 10) 00:07:51.708 40531.495 - 40733.145: 99.5634% ( 4) 00:07:51.708 40934.794 - 41136.443: 99.5864% ( 2) 00:07:51.708 41136.443 - 41338.092: 99.6324% ( 4) 00:07:51.708 41338.092 - 41539.742: 99.6553% ( 2) 00:07:51.708 41539.742 - 41741.391: 99.7013% ( 4) 00:07:51.708 41741.391 - 41943.040: 99.7587% ( 5) 00:07:51.708 41943.040 - 42144.689: 99.8392% ( 7) 00:07:51.708 42144.689 - 42346.338: 99.9196% ( 7) 00:07:51.708 42346.338 - 42547.988: 100.0000% ( 7) 00:07:51.708 00:07:51.708 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:51.708 ============================================================================== 00:07:51.708 Range in us Cumulative IO count 00:07:51.708 10183.286 - 10233.698: 0.0804% ( 7) 00:07:51.708 10233.698 - 10284.111: 0.1264% ( 4) 00:07:51.708 10284.111 - 10334.523: 0.1953% ( 6) 00:07:51.708 10334.523 - 10384.935: 0.4021% ( 18) 00:07:51.708 10384.935 - 10435.348: 0.5285% ( 11) 00:07:51.708 10435.348 - 10485.760: 0.7353% ( 18) 00:07:51.708 10485.760 - 10536.172: 0.9651% ( 20) 00:07:51.708 10536.172 - 10586.585: 1.1719% ( 18) 00:07:51.708 10586.585 - 10636.997: 1.3557% ( 16) 00:07:51.708 10636.997 - 10687.409: 1.5395% ( 16) 00:07:51.708 10687.409 - 10737.822: 1.7578% ( 19) 00:07:51.708 10737.822 - 10788.234: 1.9646% ( 18) 00:07:51.708 10788.234 - 10838.646: 2.1944% ( 20) 00:07:51.708 10838.646 - 10889.058: 2.7114% ( 45) 00:07:51.708 10889.058 - 10939.471: 3.1939% ( 42) 00:07:51.708 10939.471 - 10989.883: 3.6305% ( 38) 00:07:51.708 10989.883 - 11040.295: 3.8373% ( 18) 00:07:51.708 11040.295 - 11090.708: 4.1705% ( 29) 00:07:51.708 11090.708 - 11141.120: 4.5381% ( 32) 00:07:51.708 11141.120 - 11191.532: 5.2390% ( 61) 00:07:51.708 11191.532 - 11241.945: 5.5951% ( 31) 00:07:51.708 11241.945 - 11292.357: 5.9628% ( 32) 00:07:51.708 11292.357 - 11342.769: 6.3764% ( 36) 00:07:51.708 11342.769 - 11393.182: 6.8474% ( 41) 00:07:51.708 11393.182 - 11443.594: 7.4334% ( 51) 00:07:51.708 11443.594 - 11494.006: 7.9504% ( 45) 00:07:51.708 11494.006 - 11544.418: 8.8695% ( 80) 00:07:51.708 11544.418 - 11594.831: 9.6622% ( 69) 00:07:51.708 11594.831 - 11645.243: 10.3286% ( 58) 00:07:51.708 11645.243 - 11695.655: 10.7422% ( 36) 00:07:51.708 11695.655 - 11746.068: 11.2937% ( 48) 00:07:51.708 11746.068 - 11796.480: 11.7073% ( 36) 00:07:51.708 11796.480 - 11846.892: 12.2932% ( 51) 00:07:51.708 11846.892 - 11897.305: 12.8332% ( 47) 00:07:51.708 11897.305 - 11947.717: 13.4536% ( 54) 00:07:51.708 11947.717 - 11998.129: 14.0625% ( 53) 00:07:51.708 11998.129 - 12048.542: 14.8093% ( 65) 00:07:51.708 12048.542 - 12098.954: 15.5446% ( 64) 00:07:51.708 12098.954 - 12149.366: 16.2454% ( 61) 00:07:51.708 12149.366 - 12199.778: 16.9003% ( 57) 00:07:51.708 12199.778 - 12250.191: 17.4173% ( 45) 00:07:51.708 12250.191 - 12300.603: 17.7619% ( 30) 00:07:51.708 12300.603 - 12351.015: 18.2904% ( 46) 00:07:51.708 12351.015 - 12401.428: 18.8764% ( 51) 00:07:51.708 12401.428 - 12451.840: 19.7381% ( 75) 00:07:51.708 12451.840 - 12502.252: 20.6112% ( 76) 00:07:51.708 12502.252 - 12552.665: 21.0018% ( 34) 00:07:51.708 12552.665 - 12603.077: 21.4154% ( 36) 00:07:51.708 12603.077 - 12653.489: 21.8750% ( 40) 00:07:51.708 12653.489 - 12703.902: 22.2656% ( 34) 00:07:51.708 12703.902 - 12754.314: 22.8401% ( 50) 00:07:51.708 12754.314 - 12804.726: 23.2996% ( 40) 00:07:51.708 12804.726 - 12855.138: 23.8741% ( 50) 00:07:51.708 12855.138 - 12905.551: 
24.4945% ( 54) 00:07:51.708 12905.551 - 13006.375: 25.8042% ( 114) 00:07:51.708 13006.375 - 13107.200: 26.8038% ( 87) 00:07:51.708 13107.200 - 13208.025: 28.0790% ( 111) 00:07:51.708 13208.025 - 13308.849: 29.2739% ( 104) 00:07:51.708 13308.849 - 13409.674: 30.4343% ( 101) 00:07:51.708 13409.674 - 13510.498: 31.6636% ( 107) 00:07:51.708 13510.498 - 13611.323: 33.1916% ( 133) 00:07:51.708 13611.323 - 13712.148: 34.5703% ( 120) 00:07:51.708 13712.148 - 13812.972: 36.2132% ( 143) 00:07:51.708 13812.972 - 13913.797: 37.5345% ( 115) 00:07:51.708 13913.797 - 14014.622: 39.1659% ( 142) 00:07:51.708 14014.622 - 14115.446: 41.0960% ( 168) 00:07:51.708 14115.446 - 14216.271: 43.1526% ( 179) 00:07:51.708 14216.271 - 14317.095: 44.7495% ( 139) 00:07:51.708 14317.095 - 14417.920: 46.3580% ( 140) 00:07:51.708 14417.920 - 14518.745: 48.4030% ( 178) 00:07:51.708 14518.745 - 14619.569: 50.3447% ( 169) 00:07:51.708 14619.569 - 14720.394: 52.9297% ( 225) 00:07:51.708 14720.394 - 14821.218: 54.9977% ( 180) 00:07:51.708 14821.218 - 14922.043: 56.8474% ( 161) 00:07:51.708 14922.043 - 15022.868: 58.8350% ( 173) 00:07:51.708 15022.868 - 15123.692: 61.3051% ( 215) 00:07:51.708 15123.692 - 15224.517: 63.9936% ( 234) 00:07:51.708 15224.517 - 15325.342: 65.8548% ( 162) 00:07:51.708 15325.342 - 15426.166: 67.7390% ( 164) 00:07:51.708 15426.166 - 15526.991: 69.9104% ( 189) 00:07:51.708 15526.991 - 15627.815: 72.0933% ( 190) 00:07:51.708 15627.815 - 15728.640: 73.9545% ( 162) 00:07:51.708 15728.640 - 15829.465: 75.5055% ( 135) 00:07:51.708 15829.465 - 15930.289: 77.2518% ( 152) 00:07:51.708 15930.289 - 16031.114: 78.7454% ( 130) 00:07:51.708 16031.114 - 16131.938: 80.2275% ( 129) 00:07:51.708 16131.938 - 16232.763: 81.6062% ( 120) 00:07:51.708 16232.763 - 16333.588: 83.1227% ( 132) 00:07:51.708 16333.588 - 16434.412: 84.7312% ( 140) 00:07:51.708 16434.412 - 16535.237: 86.1558% ( 124) 00:07:51.708 16535.237 - 16636.062: 86.9830% ( 72) 00:07:51.708 16636.062 - 16736.886: 87.9481% ( 84) 00:07:51.708 16736.886 - 16837.711: 88.9361% ( 86) 00:07:51.708 16837.711 - 16938.535: 89.7174% ( 68) 00:07:51.708 16938.535 - 17039.360: 90.5331% ( 71) 00:07:51.708 17039.360 - 17140.185: 91.2569% ( 63) 00:07:51.708 17140.185 - 17241.009: 91.9692% ( 62) 00:07:51.708 17241.009 - 17341.834: 92.4747% ( 44) 00:07:51.708 17341.834 - 17442.658: 92.8768% ( 35) 00:07:51.708 17442.658 - 17543.483: 93.2560% ( 33) 00:07:51.708 17543.483 - 17644.308: 93.6351% ( 33) 00:07:51.708 17644.308 - 17745.132: 94.2555% ( 54) 00:07:51.708 17745.132 - 17845.957: 94.6002% ( 30) 00:07:51.708 17845.957 - 17946.782: 94.9219% ( 28) 00:07:51.708 17946.782 - 18047.606: 95.1402% ( 19) 00:07:51.708 18047.606 - 18148.431: 95.4159% ( 24) 00:07:51.708 18148.431 - 18249.255: 95.5653% ( 13) 00:07:51.708 18249.255 - 18350.080: 95.7261% ( 14) 00:07:51.708 18350.080 - 18450.905: 95.8525% ( 11) 00:07:51.708 18450.905 - 18551.729: 96.0248% ( 15) 00:07:51.709 18551.729 - 18652.554: 96.1742% ( 13) 00:07:51.709 18652.554 - 18753.378: 96.3006% ( 11) 00:07:51.709 18753.378 - 18854.203: 96.4614% ( 14) 00:07:51.709 18854.203 - 18955.028: 96.5878% ( 11) 00:07:51.709 18955.028 - 19055.852: 96.6567% ( 6) 00:07:51.709 19055.852 - 19156.677: 96.7142% ( 5) 00:07:51.709 19156.677 - 19257.502: 96.7716% ( 5) 00:07:51.709 19257.502 - 19358.326: 96.8176% ( 4) 00:07:51.709 19358.326 - 19459.151: 96.8635% ( 4) 00:07:51.709 19459.151 - 19559.975: 96.9210% ( 5) 00:07:51.709 19559.975 - 19660.800: 96.9669% ( 4) 00:07:51.709 19660.800 - 19761.625: 97.0588% ( 8) 00:07:51.709 19761.625 - 19862.449: 97.2197% 
( 14) 00:07:51.709 19862.449 - 19963.274: 97.4724% ( 22) 00:07:51.709 19963.274 - 20064.098: 97.5528% ( 7) 00:07:51.709 20064.098 - 20164.923: 97.6103% ( 5) 00:07:51.709 20164.923 - 20265.748: 97.7252% ( 10) 00:07:51.709 20265.748 - 20366.572: 97.9550% ( 20) 00:07:51.709 20366.572 - 20467.397: 98.0354% ( 7) 00:07:51.709 20467.397 - 20568.222: 98.1273% ( 8) 00:07:51.709 20568.222 - 20669.046: 98.2537% ( 11) 00:07:51.709 20669.046 - 20769.871: 98.4145% ( 14) 00:07:51.709 20769.871 - 20870.695: 98.4835% ( 6) 00:07:51.709 20870.695 - 20971.520: 98.5294% ( 4) 00:07:51.709 29440.788 - 29642.437: 98.5409% ( 1) 00:07:51.709 29642.437 - 29844.086: 98.5983% ( 5) 00:07:51.709 29844.086 - 30045.735: 98.6673% ( 6) 00:07:51.709 30045.735 - 30247.385: 98.7132% ( 4) 00:07:51.709 30247.385 - 30449.034: 98.7822% ( 6) 00:07:51.709 30449.034 - 30650.683: 98.8741% ( 8) 00:07:51.709 30650.683 - 30852.332: 98.9430% ( 6) 00:07:51.709 30852.332 - 31053.982: 99.0119% ( 6) 00:07:51.709 31053.982 - 31255.631: 99.0694% ( 5) 00:07:51.709 31255.631 - 31457.280: 99.1613% ( 8) 00:07:51.709 31457.280 - 31658.929: 99.2417% ( 7) 00:07:51.709 31658.929 - 31860.578: 99.2647% ( 2) 00:07:51.709 39523.249 - 39724.898: 99.2877% ( 2) 00:07:51.709 39724.898 - 39926.548: 99.3796% ( 8) 00:07:51.709 39926.548 - 40128.197: 99.4485% ( 6) 00:07:51.709 40128.197 - 40329.846: 99.5175% ( 6) 00:07:51.709 40329.846 - 40531.495: 99.5979% ( 7) 00:07:51.709 40531.495 - 40733.145: 99.6668% ( 6) 00:07:51.709 40733.145 - 40934.794: 99.7472% ( 7) 00:07:51.709 40934.794 - 41136.443: 99.8277% ( 7) 00:07:51.709 41136.443 - 41338.092: 99.8966% ( 6) 00:07:51.709 41338.092 - 41539.742: 99.9770% ( 7) 00:07:51.709 41539.742 - 41741.391: 100.0000% ( 2) 00:07:51.709 00:07:51.709 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:51.709 ============================================================================== 00:07:51.709 Range in us Cumulative IO count 00:07:51.709 9981.637 - 10032.049: 0.0115% ( 1) 00:07:51.709 10183.286 - 10233.698: 0.0230% ( 1) 00:07:51.709 10233.698 - 10284.111: 0.0689% ( 4) 00:07:51.709 10284.111 - 10334.523: 0.1608% ( 8) 00:07:51.709 10334.523 - 10384.935: 0.2183% ( 5) 00:07:51.709 10384.935 - 10435.348: 0.2987% ( 7) 00:07:51.709 10435.348 - 10485.760: 0.4021% ( 9) 00:07:51.709 10485.760 - 10536.172: 0.4825% ( 7) 00:07:51.709 10536.172 - 10586.585: 0.5400% ( 5) 00:07:51.709 10586.585 - 10636.997: 0.6664% ( 11) 00:07:51.709 10636.997 - 10687.409: 0.7698% ( 9) 00:07:51.709 10687.409 - 10737.822: 0.8617% ( 8) 00:07:51.709 10737.822 - 10788.234: 1.0225% ( 14) 00:07:51.709 10788.234 - 10838.646: 1.1949% ( 15) 00:07:51.709 10838.646 - 10889.058: 1.3557% ( 14) 00:07:51.709 10889.058 - 10939.471: 1.6544% ( 26) 00:07:51.709 10939.471 - 10989.883: 1.8957% ( 21) 00:07:51.709 10989.883 - 11040.295: 2.1255% ( 20) 00:07:51.709 11040.295 - 11090.708: 2.3782% ( 22) 00:07:51.709 11090.708 - 11141.120: 2.6654% ( 25) 00:07:51.709 11141.120 - 11191.532: 3.0561% ( 34) 00:07:51.709 11191.532 - 11241.945: 3.5386% ( 42) 00:07:51.709 11241.945 - 11292.357: 4.0211% ( 42) 00:07:51.709 11292.357 - 11342.769: 4.6990% ( 59) 00:07:51.709 11342.769 - 11393.182: 5.4573% ( 66) 00:07:51.709 11393.182 - 11443.594: 6.0317% ( 50) 00:07:51.709 11443.594 - 11494.006: 6.8589% ( 72) 00:07:51.709 11494.006 - 11544.418: 7.5023% ( 56) 00:07:51.709 11544.418 - 11594.831: 8.3065% ( 70) 00:07:51.709 11594.831 - 11645.243: 8.9154% ( 53) 00:07:51.709 11645.243 - 11695.655: 9.5129% ( 52) 00:07:51.709 11695.655 - 11746.068: 10.2711% ( 66) 00:07:51.709 11746.068 - 
11796.480: 11.1213% ( 74) 00:07:51.709 11796.480 - 11846.892: 12.0634% ( 82) 00:07:51.709 11846.892 - 11897.305: 13.0859% ( 89) 00:07:51.709 11897.305 - 11947.717: 14.0051% ( 80) 00:07:51.709 11947.717 - 11998.129: 14.7174% ( 62) 00:07:51.709 11998.129 - 12048.542: 15.4756% ( 66) 00:07:51.709 12048.542 - 12098.954: 16.1994% ( 63) 00:07:51.709 12098.954 - 12149.366: 16.8888% ( 60) 00:07:51.709 12149.366 - 12199.778: 17.6011% ( 62) 00:07:51.709 12199.778 - 12250.191: 18.3134% ( 62) 00:07:51.709 12250.191 - 12300.603: 18.9913% ( 59) 00:07:51.709 12300.603 - 12351.015: 19.5427% ( 48) 00:07:51.709 12351.015 - 12401.428: 20.1861% ( 56) 00:07:51.709 12401.428 - 12451.840: 20.8984% ( 62) 00:07:51.709 12451.840 - 12502.252: 21.4729% ( 50) 00:07:51.709 12502.252 - 12552.665: 22.2426% ( 67) 00:07:51.709 12552.665 - 12603.077: 22.9320% ( 60) 00:07:51.709 12603.077 - 12653.489: 23.5179% ( 51) 00:07:51.709 12653.489 - 12703.902: 24.0924% ( 50) 00:07:51.709 12703.902 - 12754.314: 24.6094% ( 45) 00:07:51.709 12754.314 - 12804.726: 25.2528% ( 56) 00:07:51.709 12804.726 - 12855.138: 25.6549% ( 35) 00:07:51.709 12855.138 - 12905.551: 26.1144% ( 40) 00:07:51.709 12905.551 - 13006.375: 27.0221% ( 79) 00:07:51.709 13006.375 - 13107.200: 28.2514% ( 107) 00:07:51.709 13107.200 - 13208.025: 29.1131% ( 75) 00:07:51.709 13208.025 - 13308.849: 29.9747% ( 75) 00:07:51.709 13308.849 - 13409.674: 30.7904% ( 71) 00:07:51.709 13409.674 - 13510.498: 31.7900% ( 87) 00:07:51.709 13510.498 - 13611.323: 32.9848% ( 104) 00:07:51.709 13611.323 - 13712.148: 34.1912% ( 105) 00:07:51.709 13712.148 - 13812.972: 35.4205% ( 107) 00:07:51.709 13812.972 - 13913.797: 36.6498% ( 107) 00:07:51.709 13913.797 - 14014.622: 37.8102% ( 101) 00:07:51.709 14014.622 - 14115.446: 39.0740% ( 110) 00:07:51.709 14115.446 - 14216.271: 40.5561% ( 129) 00:07:51.709 14216.271 - 14317.095: 42.3254% ( 154) 00:07:51.709 14317.095 - 14417.920: 44.2210% ( 165) 00:07:51.709 14417.920 - 14518.745: 46.6452% ( 211) 00:07:51.709 14518.745 - 14619.569: 49.1153% ( 215) 00:07:51.709 14619.569 - 14720.394: 51.5970% ( 216) 00:07:51.709 14720.394 - 14821.218: 53.6650% ( 180) 00:07:51.709 14821.218 - 14922.043: 56.3534% ( 234) 00:07:51.709 14922.043 - 15022.868: 58.8465% ( 217) 00:07:51.709 15022.868 - 15123.692: 60.7881% ( 169) 00:07:51.709 15123.692 - 15224.517: 62.9940% ( 192) 00:07:51.709 15224.517 - 15325.342: 65.0506% ( 179) 00:07:51.709 15325.342 - 15426.166: 67.6011% ( 222) 00:07:51.709 15426.166 - 15526.991: 70.2091% ( 227) 00:07:51.709 15526.991 - 15627.815: 72.8056% ( 226) 00:07:51.709 15627.815 - 15728.640: 74.9540% ( 187) 00:07:51.709 15728.640 - 15829.465: 76.9991% ( 178) 00:07:51.709 15829.465 - 15930.289: 78.9522% ( 170) 00:07:51.709 15930.289 - 16031.114: 81.0547% ( 183) 00:07:51.709 16031.114 - 16131.938: 82.9159% ( 162) 00:07:51.709 16131.938 - 16232.763: 84.6278% ( 149) 00:07:51.709 16232.763 - 16333.588: 85.6733% ( 91) 00:07:51.709 16333.588 - 16434.412: 86.4315% ( 66) 00:07:51.709 16434.412 - 16535.237: 87.4196% ( 86) 00:07:51.709 16535.237 - 16636.062: 88.2123% ( 69) 00:07:51.709 16636.062 - 16736.886: 88.9361% ( 63) 00:07:51.709 16736.886 - 16837.711: 89.6944% ( 66) 00:07:51.709 16837.711 - 16938.535: 90.1540% ( 40) 00:07:51.709 16938.535 - 17039.360: 90.5676% ( 36) 00:07:51.709 17039.360 - 17140.185: 91.1650% ( 52) 00:07:51.709 17140.185 - 17241.009: 91.5901% ( 37) 00:07:51.709 17241.009 - 17341.834: 92.1875% ( 52) 00:07:51.709 17341.834 - 17442.658: 92.7275% ( 47) 00:07:51.709 17442.658 - 17543.483: 92.9458% ( 19) 00:07:51.709 17543.483 - 
17644.308: 93.1870% ( 21) 00:07:51.709 17644.308 - 17745.132: 93.4743% ( 25) 00:07:51.709 17745.132 - 17845.957: 93.7040% ( 20) 00:07:51.709 17845.957 - 17946.782: 94.0717% ( 32) 00:07:51.709 17946.782 - 18047.606: 94.3244% ( 22) 00:07:51.709 18047.606 - 18148.431: 94.7610% ( 38) 00:07:51.709 18148.431 - 18249.255: 95.2436% ( 42) 00:07:51.709 18249.255 - 18350.080: 95.6687% ( 37) 00:07:51.709 18350.080 - 18450.905: 96.0593% ( 34) 00:07:51.709 18450.905 - 18551.729: 96.3580% ( 26) 00:07:51.709 18551.729 - 18652.554: 96.4959% ( 12) 00:07:51.709 18652.554 - 18753.378: 96.6337% ( 12) 00:07:51.709 18753.378 - 18854.203: 96.9784% ( 30) 00:07:51.709 18854.203 - 18955.028: 97.1392% ( 14) 00:07:51.709 18955.028 - 19055.852: 97.3001% ( 14) 00:07:51.709 19055.852 - 19156.677: 97.4380% ( 12) 00:07:51.709 19156.677 - 19257.502: 97.5184% ( 7) 00:07:51.709 19257.502 - 19358.326: 97.5758% ( 5) 00:07:51.709 19358.326 - 19459.151: 97.6333% ( 5) 00:07:51.709 19459.151 - 19559.975: 97.6907% ( 5) 00:07:51.709 19559.975 - 19660.800: 97.7597% ( 6) 00:07:51.709 19660.800 - 19761.625: 97.7941% ( 3) 00:07:51.709 20164.923 - 20265.748: 97.8056% ( 1) 00:07:51.709 20265.748 - 20366.572: 97.8286% ( 2) 00:07:51.709 20366.572 - 20467.397: 97.9090% ( 7) 00:07:51.709 20467.397 - 20568.222: 98.0009% ( 8) 00:07:51.709 20568.222 - 20669.046: 98.0928% ( 8) 00:07:51.709 20669.046 - 20769.871: 98.3571% ( 23) 00:07:51.709 20769.871 - 20870.695: 98.4605% ( 9) 00:07:51.709 20870.695 - 20971.520: 98.5064% ( 4) 00:07:51.709 20971.520 - 21072.345: 98.5294% ( 2) 00:07:51.709 28029.243 - 28230.892: 98.5754% ( 4) 00:07:51.709 28230.892 - 28432.542: 98.6558% ( 7) 00:07:51.709 28432.542 - 28634.191: 98.7362% ( 7) 00:07:51.709 28634.191 - 28835.840: 98.8281% ( 8) 00:07:51.709 28835.840 - 29037.489: 98.9085% ( 7) 00:07:51.709 29037.489 - 29239.138: 98.9890% ( 7) 00:07:51.709 29239.138 - 29440.788: 99.0809% ( 8) 00:07:51.710 29440.788 - 29642.437: 99.1613% ( 7) 00:07:51.710 29642.437 - 29844.086: 99.2417% ( 7) 00:07:51.710 29844.086 - 30045.735: 99.2647% ( 2) 00:07:51.710 37708.406 - 37910.055: 99.2992% ( 3) 00:07:51.710 37910.055 - 38111.705: 99.3681% ( 6) 00:07:51.710 38111.705 - 38313.354: 99.4485% ( 7) 00:07:51.710 38313.354 - 38515.003: 99.5290% ( 7) 00:07:51.710 38515.003 - 38716.652: 99.6094% ( 7) 00:07:51.710 38716.652 - 38918.302: 99.6898% ( 7) 00:07:51.710 38918.302 - 39119.951: 99.7702% ( 7) 00:07:51.710 39119.951 - 39321.600: 99.8506% ( 7) 00:07:51.710 39321.600 - 39523.249: 99.9426% ( 8) 00:07:51.710 39523.249 - 39724.898: 100.0000% ( 5) 00:07:51.710 00:07:51.710 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:51.710 ============================================================================== 00:07:51.710 Range in us Cumulative IO count 00:07:51.710 9679.163 - 9729.575: 0.0345% ( 3) 00:07:51.710 9729.575 - 9779.988: 0.0460% ( 1) 00:07:51.710 9779.988 - 9830.400: 0.0919% ( 4) 00:07:51.710 9830.400 - 9880.812: 0.1149% ( 2) 00:07:51.710 9880.812 - 9931.225: 0.1264% ( 1) 00:07:51.710 9931.225 - 9981.637: 0.1838% ( 5) 00:07:51.710 9981.637 - 10032.049: 0.2642% ( 7) 00:07:51.710 10032.049 - 10082.462: 0.3562% ( 8) 00:07:51.710 10082.462 - 10132.874: 0.4710% ( 10) 00:07:51.710 10132.874 - 10183.286: 0.5859% ( 10) 00:07:51.710 10183.286 - 10233.698: 0.6893% ( 9) 00:07:51.710 10233.698 - 10284.111: 0.7812% ( 8) 00:07:51.710 10284.111 - 10334.523: 0.9421% ( 14) 00:07:51.710 10334.523 - 10384.935: 1.1029% ( 14) 00:07:51.710 10384.935 - 10435.348: 1.2523% ( 13) 00:07:51.710 10435.348 - 10485.760: 1.4821% ( 20) 
00:07:51.710 10485.760 - 10536.172: 1.5970% ( 10) 00:07:51.710 10536.172 - 10586.585: 1.7004% ( 9) 00:07:51.710 10586.585 - 10636.997: 1.7808% ( 7) 00:07:51.710 10636.997 - 10687.409: 1.8612% ( 7) 00:07:51.710 10687.409 - 10737.822: 1.9531% ( 8) 00:07:51.710 10737.822 - 10788.234: 2.0221% ( 6) 00:07:51.710 10788.234 - 10838.646: 2.1255% ( 9) 00:07:51.710 10838.646 - 10889.058: 2.2518% ( 11) 00:07:51.710 10889.058 - 10939.471: 2.4816% ( 20) 00:07:51.710 10939.471 - 10989.883: 2.8493% ( 32) 00:07:51.710 10989.883 - 11040.295: 3.1480% ( 26) 00:07:51.710 11040.295 - 11090.708: 3.3548% ( 18) 00:07:51.710 11090.708 - 11141.120: 3.6305% ( 24) 00:07:51.710 11141.120 - 11191.532: 3.9752% ( 30) 00:07:51.710 11191.532 - 11241.945: 4.2854% ( 27) 00:07:51.710 11241.945 - 11292.357: 4.6645% ( 33) 00:07:51.710 11292.357 - 11342.769: 5.1356% ( 41) 00:07:51.710 11342.769 - 11393.182: 5.7560% ( 54) 00:07:51.710 11393.182 - 11443.594: 6.3764% ( 54) 00:07:51.710 11443.594 - 11494.006: 6.9853% ( 53) 00:07:51.710 11494.006 - 11544.418: 7.7436% ( 66) 00:07:51.710 11544.418 - 11594.831: 8.4099% ( 58) 00:07:51.710 11594.831 - 11645.243: 9.1452% ( 64) 00:07:51.710 11645.243 - 11695.655: 9.8805% ( 64) 00:07:51.710 11695.655 - 11746.068: 10.5009% ( 54) 00:07:51.710 11746.068 - 11796.480: 11.3396% ( 73) 00:07:51.710 11796.480 - 11846.892: 12.0290% ( 60) 00:07:51.710 11846.892 - 11897.305: 12.9251% ( 78) 00:07:51.710 11897.305 - 11947.717: 14.1085% ( 103) 00:07:51.710 11947.717 - 11998.129: 15.3952% ( 112) 00:07:51.710 11998.129 - 12048.542: 16.8199% ( 124) 00:07:51.710 12048.542 - 12098.954: 17.5437% ( 63) 00:07:51.710 12098.954 - 12149.366: 18.1526% ( 53) 00:07:51.710 12149.366 - 12199.778: 18.8994% ( 65) 00:07:51.710 12199.778 - 12250.191: 19.7151% ( 71) 00:07:51.710 12250.191 - 12300.603: 20.5997% ( 77) 00:07:51.710 12300.603 - 12351.015: 21.1742% ( 50) 00:07:51.710 12351.015 - 12401.428: 21.6797% ( 44) 00:07:51.710 12401.428 - 12451.840: 22.1163% ( 38) 00:07:51.710 12451.840 - 12502.252: 22.7022% ( 51) 00:07:51.710 12502.252 - 12552.665: 23.1618% ( 40) 00:07:51.710 12552.665 - 12603.077: 23.7017% ( 47) 00:07:51.710 12603.077 - 12653.489: 24.2877% ( 51) 00:07:51.710 12653.489 - 12703.902: 24.6898% ( 35) 00:07:51.710 12703.902 - 12754.314: 25.2642% ( 50) 00:07:51.710 12754.314 - 12804.726: 25.6664% ( 35) 00:07:51.710 12804.726 - 12855.138: 26.1489% ( 42) 00:07:51.710 12855.138 - 12905.551: 26.5625% ( 36) 00:07:51.710 12905.551 - 13006.375: 27.4127% ( 74) 00:07:51.710 13006.375 - 13107.200: 28.1939% ( 68) 00:07:51.710 13107.200 - 13208.025: 29.4462% ( 109) 00:07:51.710 13208.025 - 13308.849: 30.3653% ( 80) 00:07:51.710 13308.849 - 13409.674: 31.5717% ( 105) 00:07:51.710 13409.674 - 13510.498: 32.6976% ( 98) 00:07:51.710 13510.498 - 13611.323: 33.5363% ( 73) 00:07:51.710 13611.323 - 13712.148: 34.3865% ( 74) 00:07:51.710 13712.148 - 13812.972: 35.5009% ( 97) 00:07:51.710 13812.972 - 13913.797: 36.8911% ( 121) 00:07:51.710 13913.797 - 14014.622: 38.1089% ( 106) 00:07:51.710 14014.622 - 14115.446: 39.8208% ( 149) 00:07:51.710 14115.446 - 14216.271: 41.4062% ( 138) 00:07:51.710 14216.271 - 14317.095: 43.3594% ( 170) 00:07:51.710 14317.095 - 14417.920: 45.0597% ( 148) 00:07:51.710 14417.920 - 14518.745: 46.8405% ( 155) 00:07:51.710 14518.745 - 14619.569: 49.2762% ( 212) 00:07:51.710 14619.569 - 14720.394: 51.0225% ( 152) 00:07:51.710 14720.394 - 14821.218: 53.2284% ( 192) 00:07:51.710 14821.218 - 14922.043: 54.9977% ( 154) 00:07:51.710 14922.043 - 15022.868: 57.1232% ( 185) 00:07:51.710 15022.868 - 15123.692: 59.1797% ( 
179) 00:07:51.710 15123.692 - 15224.517: 61.7992% ( 228) 00:07:51.710 15224.517 - 15325.342: 64.3612% ( 223) 00:07:51.710 15325.342 - 15426.166: 67.3024% ( 256) 00:07:51.710 15426.166 - 15526.991: 69.6921% ( 208) 00:07:51.710 15526.991 - 15627.815: 71.5303% ( 160) 00:07:51.710 15627.815 - 15728.640: 73.8051% ( 198) 00:07:51.710 15728.640 - 15829.465: 76.2638% ( 214) 00:07:51.710 15829.465 - 15930.289: 78.3433% ( 181) 00:07:51.710 15930.289 - 16031.114: 80.3079% ( 171) 00:07:51.710 16031.114 - 16131.938: 81.9164% ( 140) 00:07:51.710 16131.938 - 16232.763: 83.5708% ( 144) 00:07:51.710 16232.763 - 16333.588: 84.9380% ( 119) 00:07:51.710 16333.588 - 16434.412: 86.0639% ( 98) 00:07:51.710 16434.412 - 16535.237: 86.9600% ( 78) 00:07:51.710 16535.237 - 16636.062: 88.0859% ( 98) 00:07:51.710 16636.062 - 16736.886: 88.8327% ( 65) 00:07:51.710 16736.886 - 16837.711: 89.5106% ( 59) 00:07:51.710 16837.711 - 16938.535: 90.0735% ( 49) 00:07:51.710 16938.535 - 17039.360: 90.5676% ( 43) 00:07:51.710 17039.360 - 17140.185: 90.9926% ( 37) 00:07:51.710 17140.185 - 17241.009: 91.5211% ( 46) 00:07:51.710 17241.009 - 17341.834: 92.2449% ( 63) 00:07:51.710 17341.834 - 17442.658: 92.7275% ( 42) 00:07:51.710 17442.658 - 17543.483: 92.9917% ( 23) 00:07:51.710 17543.483 - 17644.308: 93.1756% ( 16) 00:07:51.710 17644.308 - 17745.132: 93.4398% ( 23) 00:07:51.710 17745.132 - 17845.957: 93.7040% ( 23) 00:07:51.710 17845.957 - 17946.782: 94.0832% ( 33) 00:07:51.710 17946.782 - 18047.606: 94.5312% ( 39) 00:07:51.710 18047.606 - 18148.431: 95.0138% ( 42) 00:07:51.710 18148.431 - 18249.255: 95.6227% ( 53) 00:07:51.710 18249.255 - 18350.080: 96.2201% ( 52) 00:07:51.710 18350.080 - 18450.905: 96.5763% ( 31) 00:07:51.710 18450.905 - 18551.729: 96.7946% ( 19) 00:07:51.710 18551.729 - 18652.554: 96.9095% ( 10) 00:07:51.710 18652.554 - 18753.378: 96.9554% ( 4) 00:07:51.710 18753.378 - 18854.203: 97.0358% ( 7) 00:07:51.710 18854.203 - 18955.028: 97.0703% ( 3) 00:07:51.710 18955.028 - 19055.852: 97.1507% ( 7) 00:07:51.710 19055.852 - 19156.677: 97.2886% ( 12) 00:07:51.710 19156.677 - 19257.502: 97.5069% ( 19) 00:07:51.710 19257.502 - 19358.326: 97.6907% ( 16) 00:07:51.710 19358.326 - 19459.151: 97.7941% ( 9) 00:07:51.710 20064.098 - 20164.923: 97.8286% ( 3) 00:07:51.710 20164.923 - 20265.748: 97.8975% ( 6) 00:07:51.710 20265.748 - 20366.572: 97.9894% ( 8) 00:07:51.710 20366.572 - 20467.397: 98.2422% ( 22) 00:07:51.710 20467.397 - 20568.222: 98.4145% ( 15) 00:07:51.710 20568.222 - 20669.046: 98.4720% ( 5) 00:07:51.710 20669.046 - 20769.871: 98.5294% ( 5) 00:07:51.710 27424.295 - 27625.945: 98.5869% ( 5) 00:07:51.710 27625.945 - 27827.594: 98.6673% ( 7) 00:07:51.710 27827.594 - 28029.243: 98.7477% ( 7) 00:07:51.710 28029.243 - 28230.892: 98.8396% ( 8) 00:07:51.710 28230.892 - 28432.542: 98.9200% ( 7) 00:07:51.710 28432.542 - 28634.191: 98.9890% ( 6) 00:07:51.710 28634.191 - 28835.840: 99.0694% ( 7) 00:07:51.710 28835.840 - 29037.489: 99.1498% ( 7) 00:07:51.710 29037.489 - 29239.138: 99.2417% ( 8) 00:07:51.710 29239.138 - 29440.788: 99.2647% ( 2) 00:07:51.710 37305.108 - 37506.757: 99.3107% ( 4) 00:07:51.710 37506.757 - 37708.406: 99.3796% ( 6) 00:07:51.710 37708.406 - 37910.055: 99.4600% ( 7) 00:07:51.710 37910.055 - 38111.705: 99.5290% ( 6) 00:07:51.710 38111.705 - 38313.354: 99.6209% ( 8) 00:07:51.710 38313.354 - 38515.003: 99.7013% ( 7) 00:07:51.710 38515.003 - 38716.652: 99.7932% ( 8) 00:07:51.710 38716.652 - 38918.302: 99.8736% ( 7) 00:07:51.710 38918.302 - 39119.951: 99.9655% ( 8) 00:07:51.710 39119.951 - 39321.600: 100.0000% ( 
3) 00:07:51.710 00:07:51.710 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:51.710 ============================================================================== 00:07:51.710 Range in us Cumulative IO count 00:07:51.710 9779.988 - 9830.400: 0.0345% ( 3) 00:07:51.710 9830.400 - 9880.812: 0.0689% ( 3) 00:07:51.710 9880.812 - 9931.225: 0.1034% ( 3) 00:07:51.710 9931.225 - 9981.637: 0.1953% ( 8) 00:07:51.710 9981.637 - 10032.049: 0.2642% ( 6) 00:07:51.710 10032.049 - 10082.462: 0.3791% ( 10) 00:07:51.710 10082.462 - 10132.874: 0.4940% ( 10) 00:07:51.710 10132.874 - 10183.286: 0.6319% ( 12) 00:07:51.710 10183.286 - 10233.698: 1.0340% ( 35) 00:07:51.710 10233.698 - 10284.111: 1.2063% ( 15) 00:07:51.710 10284.111 - 10334.523: 1.3672% ( 14) 00:07:51.710 10334.523 - 10384.935: 1.5740% ( 18) 00:07:51.710 10384.935 - 10435.348: 1.7119% ( 12) 00:07:51.711 10435.348 - 10485.760: 1.8497% ( 12) 00:07:51.711 10485.760 - 10536.172: 1.9646% ( 10) 00:07:51.711 10536.172 - 10586.585: 2.0565% ( 8) 00:07:51.711 10586.585 - 10636.997: 2.1255% ( 6) 00:07:51.711 10636.997 - 10687.409: 2.2059% ( 7) 00:07:51.711 10687.409 - 10737.822: 2.3093% ( 9) 00:07:51.711 10737.822 - 10788.234: 2.3667% ( 5) 00:07:51.711 10788.234 - 10838.646: 2.4127% ( 4) 00:07:51.711 10838.646 - 10889.058: 2.5620% ( 13) 00:07:51.711 10889.058 - 10939.471: 2.7574% ( 17) 00:07:51.711 10939.471 - 10989.883: 2.8263% ( 6) 00:07:51.711 10989.883 - 11040.295: 2.8837% ( 5) 00:07:51.711 11040.295 - 11090.708: 2.9642% ( 7) 00:07:51.711 11090.708 - 11141.120: 3.0676% ( 9) 00:07:51.711 11141.120 - 11191.532: 3.1939% ( 11) 00:07:51.711 11191.532 - 11241.945: 3.3778% ( 16) 00:07:51.711 11241.945 - 11292.357: 3.5846% ( 18) 00:07:51.711 11292.357 - 11342.769: 3.8488% ( 23) 00:07:51.711 11342.769 - 11393.182: 4.2050% ( 31) 00:07:51.711 11393.182 - 11443.594: 4.7105% ( 44) 00:07:51.711 11443.594 - 11494.006: 5.4228% ( 62) 00:07:51.711 11494.006 - 11544.418: 6.2845% ( 75) 00:07:51.711 11544.418 - 11594.831: 7.3070% ( 89) 00:07:51.711 11594.831 - 11645.243: 8.4444% ( 99) 00:07:51.711 11645.243 - 11695.655: 9.9609% ( 132) 00:07:51.711 11695.655 - 11746.068: 11.2822% ( 115) 00:07:51.711 11746.068 - 11796.480: 12.7183% ( 125) 00:07:51.711 11796.480 - 11846.892: 14.0970% ( 120) 00:07:51.711 11846.892 - 11897.305: 15.2114% ( 97) 00:07:51.711 11897.305 - 11947.717: 16.0156% ( 70) 00:07:51.711 11947.717 - 11998.129: 16.7394% ( 63) 00:07:51.711 11998.129 - 12048.542: 17.4288% ( 60) 00:07:51.711 12048.542 - 12098.954: 17.9573% ( 46) 00:07:51.711 12098.954 - 12149.366: 18.4168% ( 40) 00:07:51.711 12149.366 - 12199.778: 18.9108% ( 43) 00:07:51.711 12199.778 - 12250.191: 19.2325% ( 28) 00:07:51.711 12250.191 - 12300.603: 19.4853% ( 22) 00:07:51.711 12300.603 - 12351.015: 19.7955% ( 27) 00:07:51.711 12351.015 - 12401.428: 20.1402% ( 30) 00:07:51.711 12401.428 - 12451.840: 20.5882% ( 39) 00:07:51.711 12451.840 - 12502.252: 21.0018% ( 36) 00:07:51.711 12502.252 - 12552.665: 21.4959% ( 43) 00:07:51.711 12552.665 - 12603.077: 21.9669% ( 41) 00:07:51.711 12603.077 - 12653.489: 22.5528% ( 51) 00:07:51.711 12653.489 - 12703.902: 23.3226% ( 67) 00:07:51.711 12703.902 - 12754.314: 24.1039% ( 68) 00:07:51.711 12754.314 - 12804.726: 25.0115% ( 79) 00:07:51.711 12804.726 - 12855.138: 25.9191% ( 79) 00:07:51.711 12855.138 - 12905.551: 26.8153% ( 78) 00:07:51.711 12905.551 - 13006.375: 28.4352% ( 141) 00:07:51.711 13006.375 - 13107.200: 29.8943% ( 127) 00:07:51.711 13107.200 - 13208.025: 31.5028% ( 140) 00:07:51.711 13208.025 - 13308.849: 32.8355% ( 116) 00:07:51.711 
13308.849 - 13409.674: 33.8350% ( 87) 00:07:51.711 13409.674 - 13510.498: 34.7312% ( 78) 00:07:51.711 13510.498 - 13611.323: 35.3631% ( 55) 00:07:51.711 13611.323 - 13712.148: 36.0064% ( 56) 00:07:51.711 13712.148 - 13812.972: 36.7877% ( 68) 00:07:51.711 13812.972 - 13913.797: 37.4655% ( 59) 00:07:51.711 13913.797 - 14014.622: 38.1089% ( 56) 00:07:51.711 14014.622 - 14115.446: 39.4876% ( 120) 00:07:51.711 14115.446 - 14216.271: 40.9007% ( 123) 00:07:51.711 14216.271 - 14317.095: 43.2215% ( 202) 00:07:51.711 14317.095 - 14417.920: 45.9214% ( 235) 00:07:51.711 14417.920 - 14518.745: 48.0124% ( 182) 00:07:51.711 14518.745 - 14619.569: 50.5630% ( 222) 00:07:51.711 14619.569 - 14720.394: 52.3667% ( 157) 00:07:51.711 14720.394 - 14821.218: 54.4347% ( 180) 00:07:51.711 14821.218 - 14922.043: 55.9283% ( 130) 00:07:51.711 14922.043 - 15022.868: 57.5253% ( 139) 00:07:51.711 15022.868 - 15123.692: 59.1797% ( 144) 00:07:51.711 15123.692 - 15224.517: 60.9490% ( 154) 00:07:51.711 15224.517 - 15325.342: 62.8676% ( 167) 00:07:51.711 15325.342 - 15426.166: 65.8203% ( 257) 00:07:51.711 15426.166 - 15526.991: 68.5547% ( 238) 00:07:51.711 15526.991 - 15627.815: 70.6112% ( 179) 00:07:51.711 15627.815 - 15728.640: 72.3805% ( 154) 00:07:51.711 15728.640 - 15829.465: 74.3911% ( 175) 00:07:51.711 15829.465 - 15930.289: 76.5510% ( 188) 00:07:51.711 15930.289 - 16031.114: 78.7454% ( 191) 00:07:51.711 16031.114 - 16131.938: 81.1696% ( 211) 00:07:51.711 16131.938 - 16232.763: 83.3065% ( 186) 00:07:51.711 16232.763 - 16333.588: 85.2137% ( 166) 00:07:51.711 16333.588 - 16434.412: 86.9141% ( 148) 00:07:51.711 16434.412 - 16535.237: 88.0400% ( 98) 00:07:51.711 16535.237 - 16636.062: 89.0395% ( 87) 00:07:51.711 16636.062 - 16736.886: 90.0046% ( 84) 00:07:51.711 16736.886 - 16837.711: 90.6595% ( 57) 00:07:51.711 16837.711 - 16938.535: 91.3488% ( 60) 00:07:51.711 16938.535 - 17039.360: 92.1530% ( 70) 00:07:51.711 17039.360 - 17140.185: 92.7849% ( 55) 00:07:51.711 17140.185 - 17241.009: 93.2560% ( 41) 00:07:51.711 17241.009 - 17341.834: 93.6006% ( 30) 00:07:51.711 17341.834 - 17442.658: 93.8189% ( 19) 00:07:51.711 17442.658 - 17543.483: 93.9683% ( 13) 00:07:51.711 17543.483 - 17644.308: 94.0717% ( 9) 00:07:51.711 17644.308 - 17745.132: 94.1062% ( 3) 00:07:51.711 17745.132 - 17845.957: 94.1291% ( 2) 00:07:51.711 17845.957 - 17946.782: 94.1751% ( 4) 00:07:51.711 17946.782 - 18047.606: 94.2555% ( 7) 00:07:51.711 18047.606 - 18148.431: 94.3130% ( 5) 00:07:51.711 18148.431 - 18249.255: 94.4049% ( 8) 00:07:51.711 18249.255 - 18350.080: 94.6576% ( 22) 00:07:51.711 18350.080 - 18450.905: 95.1402% ( 42) 00:07:51.711 18450.905 - 18551.729: 95.6457% ( 44) 00:07:51.711 18551.729 - 18652.554: 96.0708% ( 37) 00:07:51.711 18652.554 - 18753.378: 96.5763% ( 44) 00:07:51.711 18753.378 - 18854.203: 96.7256% ( 13) 00:07:51.711 18854.203 - 18955.028: 96.8865% ( 14) 00:07:51.711 18955.028 - 19055.852: 97.0014% ( 10) 00:07:51.711 19055.852 - 19156.677: 97.1048% ( 9) 00:07:51.711 19156.677 - 19257.502: 97.1622% ( 5) 00:07:51.711 19257.502 - 19358.326: 97.2886% ( 11) 00:07:51.711 19358.326 - 19459.151: 97.5528% ( 23) 00:07:51.711 19459.151 - 19559.975: 97.6907% ( 12) 00:07:51.711 19559.975 - 19660.800: 97.7367% ( 4) 00:07:51.711 19660.800 - 19761.625: 97.7597% ( 2) 00:07:51.711 19862.449 - 19963.274: 97.7941% ( 3) 00:07:51.711 19963.274 - 20064.098: 97.8401% ( 4) 00:07:51.711 20064.098 - 20164.923: 97.9090% ( 6) 00:07:51.711 20164.923 - 20265.748: 97.9894% ( 7) 00:07:51.711 20265.748 - 20366.572: 98.0699% ( 7) 00:07:51.711 20366.572 - 20467.397: 
98.2537% ( 16) 00:07:51.711 20467.397 - 20568.222: 98.3686% ( 10) 00:07:51.711 20568.222 - 20669.046: 98.4375% ( 6) 00:07:51.711 20669.046 - 20769.871: 98.4835% ( 4) 00:07:51.711 20769.871 - 20870.695: 98.5294% ( 4) 00:07:51.711 26416.049 - 26617.698: 98.5524% ( 2) 00:07:51.711 26617.698 - 26819.348: 98.6213% ( 6) 00:07:51.711 26819.348 - 27020.997: 98.7132% ( 8) 00:07:51.711 27020.997 - 27222.646: 98.7937% ( 7) 00:07:51.711 27222.646 - 27424.295: 98.8741% ( 7) 00:07:51.711 27424.295 - 27625.945: 98.9660% ( 8) 00:07:51.711 27625.945 - 27827.594: 99.0464% ( 7) 00:07:51.711 27827.594 - 28029.243: 99.1268% ( 7) 00:07:51.711 28029.243 - 28230.892: 99.2073% ( 7) 00:07:51.711 28230.892 - 28432.542: 99.2647% ( 5) 00:07:51.711 36095.212 - 36296.862: 99.3107% ( 4) 00:07:51.711 36296.862 - 36498.511: 99.3796% ( 6) 00:07:51.712 36498.511 - 36700.160: 99.4600% ( 7) 00:07:51.712 36700.160 - 36901.809: 99.5404% ( 7) 00:07:51.712 36901.809 - 37103.458: 99.6209% ( 7) 00:07:51.712 37103.458 - 37305.108: 99.7128% ( 8) 00:07:51.712 37305.108 - 37506.757: 99.7932% ( 7) 00:07:51.712 37506.757 - 37708.406: 99.8736% ( 7) 00:07:51.712 37708.406 - 37910.055: 99.9540% ( 7) 00:07:51.712 37910.055 - 38111.705: 100.0000% ( 4) 00:07:51.712 00:07:51.712 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:51.712 ============================================================================== 00:07:51.712 Range in us Cumulative IO count 00:07:51.712 9578.338 - 9628.751: 0.0114% ( 1) 00:07:51.712 9628.751 - 9679.163: 0.0228% ( 1) 00:07:51.712 9679.163 - 9729.575: 0.0456% ( 2) 00:07:51.712 9729.575 - 9779.988: 0.0798% ( 3) 00:07:51.712 9779.988 - 9830.400: 0.1141% ( 3) 00:07:51.712 9830.400 - 9880.812: 0.1369% ( 2) 00:07:51.712 9880.812 - 9931.225: 0.1711% ( 3) 00:07:51.712 9931.225 - 9981.637: 0.1939% ( 2) 00:07:51.712 9981.637 - 10032.049: 0.2281% ( 3) 00:07:51.712 10032.049 - 10082.462: 0.2965% ( 6) 00:07:51.712 10082.462 - 10132.874: 0.5018% ( 18) 00:07:51.712 10132.874 - 10183.286: 0.7527% ( 22) 00:07:51.712 10183.286 - 10233.698: 0.9694% ( 19) 00:07:51.712 10233.698 - 10284.111: 1.1063% ( 12) 00:07:51.712 10284.111 - 10334.523: 1.3002% ( 17) 00:07:51.712 10334.523 - 10384.935: 1.5853% ( 25) 00:07:51.712 10384.935 - 10435.348: 1.8020% ( 19) 00:07:51.712 10435.348 - 10485.760: 1.9161% ( 10) 00:07:51.712 10485.760 - 10536.172: 1.9959% ( 7) 00:07:51.712 10536.172 - 10586.585: 2.0757% ( 7) 00:07:51.712 10586.585 - 10636.997: 2.1328% ( 5) 00:07:51.712 10636.997 - 10687.409: 2.1670% ( 3) 00:07:51.712 10687.409 - 10737.822: 2.2240% ( 5) 00:07:51.712 10737.822 - 10788.234: 2.2810% ( 5) 00:07:51.712 10788.234 - 10838.646: 2.3380% ( 5) 00:07:51.712 10838.646 - 10889.058: 2.4635% ( 11) 00:07:51.712 10889.058 - 10939.471: 2.5661% ( 9) 00:07:51.712 10939.471 - 10989.883: 2.8513% ( 25) 00:07:51.712 10989.883 - 11040.295: 3.1706% ( 28) 00:07:51.712 11040.295 - 11090.708: 3.4557% ( 25) 00:07:51.712 11090.708 - 11141.120: 3.6724% ( 19) 00:07:51.712 11141.120 - 11191.532: 3.9462% ( 24) 00:07:51.712 11191.532 - 11241.945: 4.2427% ( 26) 00:07:51.712 11241.945 - 11292.357: 4.6989% ( 40) 00:07:51.712 11292.357 - 11342.769: 5.2007% ( 44) 00:07:51.712 11342.769 - 11393.182: 6.0447% ( 74) 00:07:51.712 11393.182 - 11443.594: 6.9457% ( 79) 00:07:51.712 11443.594 - 11494.006: 7.9151% ( 85) 00:07:51.712 11494.006 - 11544.418: 8.6337% ( 63) 00:07:51.712 11544.418 - 11594.831: 9.3636% ( 64) 00:07:51.712 11594.831 - 11645.243: 10.3216% ( 84) 00:07:51.712 11645.243 - 11695.655: 10.9717% ( 57) 00:07:51.712 11695.655 - 11746.068: 11.6218% 
( 57) 00:07:51.712 11746.068 - 11796.480: 12.2947% ( 59) 00:07:51.712 11796.480 - 11846.892: 12.8079% ( 45) 00:07:51.712 11846.892 - 11897.305: 13.5379% ( 64) 00:07:51.712 11897.305 - 11947.717: 14.2222% ( 60) 00:07:51.712 11947.717 - 11998.129: 14.7924% ( 50) 00:07:51.712 11998.129 - 12048.542: 15.5109% ( 63) 00:07:51.712 12048.542 - 12098.954: 16.2409% ( 64) 00:07:51.712 12098.954 - 12149.366: 16.6629% ( 37) 00:07:51.712 12149.366 - 12199.778: 17.0164% ( 31) 00:07:51.712 12199.778 - 12250.191: 17.6323% ( 54) 00:07:51.712 12250.191 - 12300.603: 18.3052% ( 59) 00:07:51.712 12300.603 - 12351.015: 18.8755% ( 50) 00:07:51.712 12351.015 - 12401.428: 19.6510% ( 68) 00:07:51.712 12401.428 - 12451.840: 20.4608% ( 71) 00:07:51.712 12451.840 - 12502.252: 21.2249% ( 67) 00:07:51.712 12502.252 - 12552.665: 22.0803% ( 75) 00:07:51.712 12552.665 - 12603.077: 22.8558% ( 68) 00:07:51.712 12603.077 - 12653.489: 23.5858% ( 64) 00:07:51.712 12653.489 - 12703.902: 24.2016% ( 54) 00:07:51.712 12703.902 - 12754.314: 24.7263% ( 46) 00:07:51.712 12754.314 - 12804.726: 25.3422% ( 54) 00:07:51.712 12804.726 - 12855.138: 26.0949% ( 66) 00:07:51.712 12855.138 - 12905.551: 26.5853% ( 43) 00:07:51.712 12905.551 - 13006.375: 27.6004% ( 89) 00:07:51.712 13006.375 - 13107.200: 28.5698% ( 85) 00:07:51.712 13107.200 - 13208.025: 29.4594% ( 78) 00:07:51.712 13208.025 - 13308.849: 30.4973% ( 91) 00:07:51.712 13308.849 - 13409.674: 32.0598% ( 137) 00:07:51.712 13409.674 - 13510.498: 33.4056% ( 118) 00:07:51.712 13510.498 - 13611.323: 34.6031% ( 105) 00:07:51.712 13611.323 - 13712.148: 36.1428% ( 135) 00:07:51.712 13712.148 - 13812.972: 37.5570% ( 124) 00:07:51.712 13812.972 - 13913.797: 39.3362% ( 156) 00:07:51.712 13913.797 - 14014.622: 40.3285% ( 87) 00:07:51.712 14014.622 - 14115.446: 41.4005% ( 94) 00:07:51.712 14115.446 - 14216.271: 42.8034% ( 123) 00:07:51.712 14216.271 - 14317.095: 43.9553% ( 101) 00:07:51.712 14317.095 - 14417.920: 45.5862% ( 143) 00:07:51.712 14417.920 - 14518.745: 47.4110% ( 160) 00:07:51.712 14518.745 - 14619.569: 48.7682% ( 119) 00:07:51.712 14619.569 - 14720.394: 50.2965% ( 134) 00:07:51.712 14720.394 - 14821.218: 52.6346% ( 205) 00:07:51.712 14821.218 - 14922.043: 54.3339% ( 149) 00:07:51.712 14922.043 - 15022.868: 55.9421% ( 141) 00:07:51.712 15022.868 - 15123.692: 58.1661% ( 195) 00:07:51.712 15123.692 - 15224.517: 60.6524% ( 218) 00:07:51.712 15224.517 - 15325.342: 62.9904% ( 205) 00:07:51.712 15325.342 - 15426.166: 65.5794% ( 227) 00:07:51.712 15426.166 - 15526.991: 67.8832% ( 202) 00:07:51.712 15526.991 - 15627.815: 70.6775% ( 245) 00:07:51.712 15627.815 - 15728.640: 73.1980% ( 221) 00:07:51.712 15728.640 - 15829.465: 75.8326% ( 231) 00:07:51.712 15829.465 - 15930.289: 78.2619% ( 213) 00:07:51.712 15930.289 - 16031.114: 80.5315% ( 199) 00:07:51.712 16031.114 - 16131.938: 82.8581% ( 204) 00:07:51.712 16131.938 - 16232.763: 84.7742% ( 168) 00:07:51.712 16232.763 - 16333.588: 86.0630% ( 113) 00:07:51.712 16333.588 - 16434.412: 87.1464% ( 95) 00:07:51.712 16434.412 - 16535.237: 88.2984% ( 101) 00:07:51.712 16535.237 - 16636.062: 89.3590% ( 93) 00:07:51.712 16636.062 - 16736.886: 90.5109% ( 101) 00:07:51.712 16736.886 - 16837.711: 91.6172% ( 97) 00:07:51.712 16837.711 - 16938.535: 92.2103% ( 52) 00:07:51.712 16938.535 - 17039.360: 92.5068% ( 26) 00:07:51.712 17039.360 - 17140.185: 92.6437% ( 12) 00:07:51.712 17140.185 - 17241.009: 92.8262% ( 16) 00:07:51.712 17241.009 - 17341.834: 92.9516% ( 11) 00:07:51.712 17341.834 - 17442.658: 93.2368% ( 25) 00:07:51.712 17442.658 - 17543.483: 93.4877% ( 22) 
00:07:51.712 17543.483 - 17644.308: 93.7842% ( 26) 00:07:51.712 17644.308 - 17745.132: 94.0123% ( 20) 00:07:51.712 17745.132 - 17845.957: 94.2404% ( 20) 00:07:51.712 17845.957 - 17946.782: 94.5598% ( 28) 00:07:51.712 17946.782 - 18047.606: 94.9589% ( 35) 00:07:51.712 18047.606 - 18148.431: 95.1984% ( 21) 00:07:51.712 18148.431 - 18249.255: 95.3923% ( 17) 00:07:51.712 18249.255 - 18350.080: 95.5064% ( 10) 00:07:51.712 18350.080 - 18450.905: 95.6775% ( 15) 00:07:51.712 18450.905 - 18551.729: 96.1337% ( 40) 00:07:51.712 18551.729 - 18652.554: 96.6697% ( 47) 00:07:51.712 18652.554 - 18753.378: 97.0803% ( 36) 00:07:51.712 18753.378 - 18854.203: 97.3540% ( 24) 00:07:51.712 18854.203 - 18955.028: 97.5707% ( 19) 00:07:51.712 18955.028 - 19055.852: 97.6620% ( 8) 00:07:51.712 19055.852 - 19156.677: 97.6962% ( 3) 00:07:51.712 19156.677 - 19257.502: 97.7418% ( 4) 00:07:51.712 19257.502 - 19358.326: 97.7874% ( 4) 00:07:51.712 19358.326 - 19459.151: 97.8102% ( 2) 00:07:51.712 19761.625 - 19862.449: 97.8216% ( 1) 00:07:51.712 19862.449 - 19963.274: 97.8786% ( 5) 00:07:51.712 19963.274 - 20064.098: 97.9927% ( 10) 00:07:51.712 20064.098 - 20164.923: 98.1524% ( 14) 00:07:51.712 20164.923 - 20265.748: 98.4831% ( 29) 00:07:51.712 20265.748 - 20366.572: 98.7226% ( 21) 00:07:51.712 20366.572 - 20467.397: 98.8595% ( 12) 00:07:51.712 20467.397 - 20568.222: 98.9279% ( 6) 00:07:51.712 20568.222 - 20669.046: 98.9849% ( 5) 00:07:51.712 20669.046 - 20769.871: 99.0306% ( 4) 00:07:51.712 20769.871 - 20870.695: 99.0990% ( 6) 00:07:51.712 20870.695 - 20971.520: 99.1560% ( 5) 00:07:51.712 20971.520 - 21072.345: 99.2130% ( 5) 00:07:51.712 21072.345 - 21173.169: 99.2587% ( 4) 00:07:51.712 21173.169 - 21273.994: 99.2701% ( 1) 00:07:51.712 26617.698 - 26819.348: 99.3157% ( 4) 00:07:51.712 26819.348 - 27020.997: 99.3955% ( 7) 00:07:51.712 27020.997 - 27222.646: 99.4754% ( 7) 00:07:51.712 27222.646 - 27424.295: 99.5666% ( 8) 00:07:51.712 27424.295 - 27625.945: 99.6464% ( 7) 00:07:51.712 27625.945 - 27827.594: 99.7263% ( 7) 00:07:51.712 27827.594 - 28029.243: 99.8061% ( 7) 00:07:51.712 28029.243 - 28230.892: 99.8859% ( 7) 00:07:51.712 28230.892 - 28432.542: 99.9772% ( 8) 00:07:51.712 28432.542 - 28634.191: 100.0000% ( 2) 00:07:51.712 00:07:51.712 16:54:26 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:07:51.712 00:07:51.712 real 0m2.558s 00:07:51.712 user 0m2.208s 00:07:51.712 sys 0m0.229s 00:07:51.712 16:54:26 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:51.712 ************************************ 00:07:51.712 END TEST nvme_perf 00:07:51.712 ************************************ 00:07:51.712 16:54:26 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:07:51.974 16:54:26 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:51.974 16:54:26 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:51.974 16:54:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.974 16:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:51.974 ************************************ 00:07:51.974 START TEST nvme_hello_world 00:07:51.974 ************************************ 00:07:51.974 16:54:26 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:07:51.974 Initializing NVMe Controllers 00:07:51.974 Attached to 0000:00:13.0 00:07:51.974 Namespace ID: 1 size: 1GB 00:07:51.974 Attached to 0000:00:10.0 00:07:51.974 Namespace ID: 
1 size: 6GB 00:07:51.974 Attached to 0000:00:11.0 00:07:51.974 Namespace ID: 1 size: 5GB 00:07:51.974 Attached to 0000:00:12.0 00:07:51.974 Namespace ID: 1 size: 4GB 00:07:51.974 Namespace ID: 2 size: 4GB 00:07:51.974 Namespace ID: 3 size: 4GB 00:07:51.974 Initialization complete. 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:51.974 INFO: using host memory buffer for IO 00:07:51.974 Hello world! 00:07:52.235 00:07:52.235 real 0m0.251s 00:07:52.235 user 0m0.086s 00:07:52.235 sys 0m0.104s 00:07:52.235 16:54:26 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.235 16:54:26 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:52.235 ************************************ 00:07:52.235 END TEST nvme_hello_world 00:07:52.235 ************************************ 00:07:52.235 16:54:26 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:52.235 16:54:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.235 16:54:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.235 16:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.235 ************************************ 00:07:52.235 START TEST nvme_sgl 00:07:52.235 ************************************ 00:07:52.235 16:54:26 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:07:52.235 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:07:52.235 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:07:52.235 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:07:52.235 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:07:52.235 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:07:52.496 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:07:52.496 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:07:52.496 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:07:52.496 0000:00:11.0: build_io_request_11 Invalid IO length parameter 
00:07:52.496 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:07:52.496 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:07:52.496 NVMe Readv/Writev Request test 00:07:52.496 Attached to 0000:00:13.0 00:07:52.496 Attached to 0000:00:10.0 00:07:52.496 Attached to 0000:00:11.0 00:07:52.496 Attached to 0000:00:12.0 00:07:52.496 0000:00:10.0: build_io_request_2 test passed 00:07:52.496 0000:00:10.0: build_io_request_4 test passed 00:07:52.496 0000:00:10.0: build_io_request_5 test passed 00:07:52.496 0000:00:10.0: build_io_request_6 test passed 00:07:52.496 0000:00:10.0: build_io_request_7 test passed 00:07:52.496 0000:00:10.0: build_io_request_10 test passed 00:07:52.496 0000:00:11.0: build_io_request_2 test passed 00:07:52.496 0000:00:11.0: build_io_request_4 test passed 00:07:52.496 0000:00:11.0: build_io_request_5 test passed 00:07:52.496 0000:00:11.0: build_io_request_6 test passed 00:07:52.496 0000:00:11.0: build_io_request_7 test passed 00:07:52.496 0000:00:11.0: build_io_request_10 test passed 00:07:52.496 Cleaning up... 00:07:52.496 00:07:52.496 real 0m0.278s 00:07:52.496 user 0m0.147s 00:07:52.496 sys 0m0.088s 00:07:52.496 16:54:26 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.496 16:54:26 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 ************************************ 00:07:52.496 END TEST nvme_sgl 00:07:52.496 ************************************ 00:07:52.496 16:54:26 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:52.496 16:54:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.496 16:54:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.496 16:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.496 ************************************ 00:07:52.496 START TEST nvme_e2edp 00:07:52.496 ************************************ 00:07:52.496 16:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:07:52.757 NVMe Write/Read with End-to-End data protection test 00:07:52.757 Attached to 0000:00:13.0 00:07:52.757 Attached to 0000:00:10.0 00:07:52.757 Attached to 0000:00:11.0 00:07:52.757 Attached to 0000:00:12.0 00:07:52.757 Cleaning up... 
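For context on what the e2edp pass above exercises: with NVMe end-to-end data protection enabled, each logical block carries protection information, and for the classic 8-byte PI format the guard field is a CRC-16 over the block data using the T10 DIF polynomial 0x8BB7. A minimal bitwise computation of that guard, written for illustration and not taken from the SPDK sources:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* CRC-16 as used for the T10 DIF guard tag: polynomial 0x8BB7,
     * initial value 0, no bit reflection, no final XOR. */
    static uint16_t crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;

        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
        }
        return crc;
    }

    int main(void)
    {
        uint8_t block[512] = {0};   /* one zero-filled logical block */
        printf("guard = 0x%04x\n", crc16_t10dif(block, sizeof(block)));
        return 0;
    }

A real data path would use a table-driven or hardware-accelerated CRC; the loop above only pins down the arithmetic that the test verifies end to end.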
00:07:52.757 00:07:52.757 real 0m0.216s 00:07:52.757 user 0m0.072s 00:07:52.757 sys 0m0.097s 00:07:52.757 16:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.757 16:54:26 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:07:52.757 ************************************ 00:07:52.757 END TEST nvme_e2edp 00:07:52.757 ************************************ 00:07:52.757 16:54:26 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:52.757 16:54:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.757 16:54:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.757 16:54:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:52.757 ************************************ 00:07:52.757 START TEST nvme_reserve 00:07:52.757 ************************************ 00:07:52.757 16:54:26 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:07:53.019 ===================================================== 00:07:53.019 NVMe Controller at PCI bus 0, device 19, function 0 00:07:53.019 ===================================================== 00:07:53.019 Reservations: Not Supported 00:07:53.019 ===================================================== 00:07:53.019 NVMe Controller at PCI bus 0, device 16, function 0 00:07:53.019 ===================================================== 00:07:53.019 Reservations: Not Supported 00:07:53.019 ===================================================== 00:07:53.019 NVMe Controller at PCI bus 0, device 17, function 0 00:07:53.019 ===================================================== 00:07:53.019 Reservations: Not Supported 00:07:53.019 ===================================================== 00:07:53.019 NVMe Controller at PCI bus 0, device 18, function 0 00:07:53.019 ===================================================== 00:07:53.019 Reservations: Not Supported 00:07:53.019 Reservation test passed 00:07:53.019 00:07:53.019 real 0m0.215s 00:07:53.019 user 0m0.078s 00:07:53.019 sys 0m0.093s 00:07:53.019 16:54:27 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.019 16:54:27 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 ************************************ 00:07:53.019 END TEST nvme_reserve 00:07:53.019 ************************************ 00:07:53.019 16:54:27 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:53.019 16:54:27 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:53.019 16:54:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.019 16:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.019 ************************************ 00:07:53.019 START TEST nvme_err_injection 00:07:53.019 ************************************ 00:07:53.019 16:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:07:53.280 NVMe Error Injection test 00:07:53.280 Attached to 0000:00:13.0 00:07:53.280 Attached to 0000:00:10.0 00:07:53.280 Attached to 0000:00:11.0 00:07:53.280 Attached to 0000:00:12.0 00:07:53.280 0000:00:10.0: get features failed as expected 00:07:53.280 0000:00:11.0: get features failed as expected 00:07:53.280 0000:00:12.0: get features failed as expected 00:07:53.280 0000:00:13.0: get features failed as expected 00:07:53.280 
0000:00:13.0: get features successfully as expected 00:07:53.280 0000:00:10.0: get features successfully as expected 00:07:53.280 0000:00:11.0: get features successfully as expected 00:07:53.280 0000:00:12.0: get features successfully as expected 00:07:53.280 0000:00:13.0: read failed as expected 00:07:53.280 0000:00:10.0: read failed as expected 00:07:53.280 0000:00:11.0: read failed as expected 00:07:53.280 0000:00:12.0: read failed as expected 00:07:53.280 0000:00:13.0: read successfully as expected 00:07:53.280 0000:00:10.0: read successfully as expected 00:07:53.280 0000:00:11.0: read successfully as expected 00:07:53.280 0000:00:12.0: read successfully as expected 00:07:53.280 Cleaning up... 00:07:53.280 00:07:53.280 real 0m0.219s 00:07:53.280 user 0m0.075s 00:07:53.280 sys 0m0.100s 00:07:53.280 16:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.280 ************************************ 00:07:53.280 END TEST nvme_err_injection 00:07:53.280 ************************************ 00:07:53.280 16:54:27 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:07:53.280 16:54:27 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:53.280 16:54:27 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:07:53.280 16:54:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.280 16:54:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:53.280 ************************************ 00:07:53.280 START TEST nvme_overhead 00:07:53.280 ************************************ 00:07:53.280 16:54:27 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:07:54.664 Initializing NVMe Controllers 00:07:54.664 Attached to 0000:00:13.0 00:07:54.664 Attached to 0000:00:10.0 00:07:54.664 Attached to 0000:00:11.0 00:07:54.664 Attached to 0000:00:12.0 00:07:54.664 Initialization complete. Launching workers. 
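To make the overhead output below easier to read: the summary line reports average, minimum, and maximum per-IO software overhead in nanoseconds (submit avg 12583.5 ns in this run), and the two histograms that follow list cumulative percentages per latency bucket in microseconds, so any percentile can be read off as the first bucket whose cumulative share reaches the target. A generic lookup over such (bucket upper edge, cumulative %) pairs; the sample values here are illustrative, not transcribed from the run:

    #include <stdio.h>

    /* Percentile lookup over a cumulative histogram like the ones below:
     * each entry is (bucket upper edge in us, cumulative percentage). */
    struct bucket { double upper_us; double cum_pct; };

    static double percentile(const struct bucket *h, int n, double pct)
    {
        for (int i = 0; i < n; i++)
            if (h[i].cum_pct >= pct)
                return h[i].upper_us;   /* first bucket reaching the target */
        return h[n - 1].upper_us;
    }

    int main(void)
    {
        /* Illustrative values only, shaped like the submit histogram. */
        struct bucket h[] = {
            { 11.5, 10.0 }, { 12.0, 45.0 }, { 12.5, 75.0 },
            { 13.5, 92.0 }, { 20.0, 98.0 }, { 88.0, 100.0 },
        };
        printf("p50 <= %.1f us\n", percentile(h, 6, 50.0));  /* 12.5 */
        printf("p99 <= %.1f us\n", percentile(h, 6, 99.0));  /* 88.0 */
        return 0;
    }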
00:07:54.664 submit (in ns) avg, min, max = 12583.5, 10778.5, 87306.2 00:07:54.664 complete (in ns) avg, min, max = 8538.2, 7331.5, 1371312.3 00:07:54.664 00:07:54.664 Submit histogram 00:07:54.664 ================ 00:07:54.664 Range in us Cumulative Count 00:07:54.664 10.732 - 10.782: 0.0145% ( 1) 00:07:54.664 10.978 - 11.028: 0.0289% ( 1) 00:07:54.664 11.028 - 11.077: 0.0434% ( 1) 00:07:54.664 11.225 - 11.274: 0.1302% ( 6) 00:07:54.665 11.274 - 11.323: 0.2748% ( 10) 00:07:54.665 11.323 - 11.372: 0.8388% ( 39) 00:07:54.665 11.372 - 11.422: 2.4150% ( 109) 00:07:54.665 11.422 - 11.471: 6.4787% ( 281) 00:07:54.665 11.471 - 11.520: 13.3189% ( 473) 00:07:54.665 11.520 - 11.569: 21.4317% ( 561) 00:07:54.665 11.569 - 11.618: 27.9393% ( 450) 00:07:54.665 11.618 - 11.668: 32.2632% ( 299) 00:07:54.665 11.668 - 11.717: 34.8373% ( 178) 00:07:54.665 11.717 - 11.766: 36.4859% ( 114) 00:07:54.665 11.766 - 11.815: 37.6717% ( 82) 00:07:54.665 11.815 - 11.865: 38.9299% ( 87) 00:07:54.665 11.865 - 11.914: 40.1446% ( 84) 00:07:54.665 11.914 - 11.963: 41.6631% ( 105) 00:07:54.665 11.963 - 12.012: 43.6009% ( 134) 00:07:54.665 12.012 - 12.062: 45.9725% ( 164) 00:07:54.665 12.062 - 12.111: 49.7035% ( 258) 00:07:54.665 12.111 - 12.160: 54.5625% ( 336) 00:07:54.665 12.160 - 12.209: 59.7108% ( 356) 00:07:54.665 12.209 - 12.258: 65.4519% ( 397) 00:07:54.665 12.258 - 12.308: 70.0506% ( 318) 00:07:54.665 12.308 - 12.357: 73.9696% ( 271) 00:07:54.665 12.357 - 12.406: 76.8619% ( 200) 00:07:54.665 12.406 - 12.455: 79.4649% ( 180) 00:07:54.665 12.455 - 12.505: 81.2871% ( 126) 00:07:54.665 12.505 - 12.554: 82.9501% ( 115) 00:07:54.665 12.554 - 12.603: 84.2661% ( 91) 00:07:54.665 12.603 - 12.702: 85.7845% ( 105) 00:07:54.665 12.702 - 12.800: 86.6522% ( 60) 00:07:54.665 12.800 - 12.898: 87.3319% ( 47) 00:07:54.665 12.898 - 12.997: 87.9393% ( 42) 00:07:54.665 12.997 - 13.095: 88.6623% ( 50) 00:07:54.665 13.095 - 13.194: 89.3854% ( 50) 00:07:54.665 13.194 - 13.292: 89.8771% ( 34) 00:07:54.665 13.292 - 13.391: 90.4411% ( 39) 00:07:54.665 13.391 - 13.489: 90.8894% ( 31) 00:07:54.665 13.489 - 13.588: 91.3377% ( 31) 00:07:54.665 13.588 - 13.686: 91.5401% ( 14) 00:07:54.665 13.686 - 13.785: 91.6847% ( 10) 00:07:54.665 13.785 - 13.883: 91.9017% ( 15) 00:07:54.665 13.883 - 13.982: 92.0318% ( 9) 00:07:54.665 13.982 - 14.080: 92.1041% ( 5) 00:07:54.665 14.080 - 14.178: 92.2054% ( 7) 00:07:54.665 14.178 - 14.277: 92.2343% ( 2) 00:07:54.665 14.277 - 14.375: 92.3355% ( 7) 00:07:54.665 14.375 - 14.474: 92.3644% ( 2) 00:07:54.665 14.474 - 14.572: 92.4512% ( 6) 00:07:54.665 14.572 - 14.671: 92.5669% ( 8) 00:07:54.665 14.671 - 14.769: 92.7549% ( 13) 00:07:54.665 14.769 - 14.868: 92.9140% ( 11) 00:07:54.665 14.868 - 14.966: 93.1309% ( 15) 00:07:54.665 14.966 - 15.065: 93.1887% ( 4) 00:07:54.665 15.065 - 15.163: 93.2899% ( 7) 00:07:54.665 15.163 - 15.262: 93.3623% ( 5) 00:07:54.665 15.262 - 15.360: 93.4490% ( 6) 00:07:54.665 15.360 - 15.458: 93.4924% ( 3) 00:07:54.665 15.458 - 15.557: 93.5358% ( 3) 00:07:54.665 15.557 - 15.655: 93.5647% ( 2) 00:07:54.665 15.655 - 15.754: 93.6226% ( 4) 00:07:54.665 15.754 - 15.852: 93.7383% ( 8) 00:07:54.665 15.852 - 15.951: 93.7961% ( 4) 00:07:54.665 15.951 - 16.049: 93.8106% ( 1) 00:07:54.665 16.049 - 16.148: 93.8395% ( 2) 00:07:54.665 16.246 - 16.345: 93.8973% ( 4) 00:07:54.665 16.345 - 16.443: 93.9696% ( 5) 00:07:54.665 16.443 - 16.542: 94.0275% ( 4) 00:07:54.665 16.640 - 16.738: 94.0419% ( 1) 00:07:54.665 16.738 - 16.837: 94.0853% ( 3) 00:07:54.665 16.837 - 16.935: 94.1432% ( 4) 00:07:54.665 16.935 - 
17.034: 94.2299% ( 6) 00:07:54.665 17.034 - 17.132: 94.2733% ( 3) 00:07:54.665 17.132 - 17.231: 94.3167% ( 3) 00:07:54.665 17.231 - 17.329: 94.4469% ( 9) 00:07:54.665 17.329 - 17.428: 94.5915% ( 10) 00:07:54.665 17.428 - 17.526: 94.7216% ( 9) 00:07:54.665 17.526 - 17.625: 94.9241% ( 14) 00:07:54.665 17.625 - 17.723: 95.0832% ( 11) 00:07:54.665 17.723 - 17.822: 95.1988% ( 8) 00:07:54.665 17.822 - 17.920: 95.3001% ( 7) 00:07:54.665 17.920 - 18.018: 95.3435% ( 3) 00:07:54.665 18.018 - 18.117: 95.4158% ( 5) 00:07:54.665 18.117 - 18.215: 95.4736% ( 4) 00:07:54.665 18.215 - 18.314: 95.6761% ( 14) 00:07:54.665 18.314 - 18.412: 95.8496% ( 12) 00:07:54.665 18.412 - 18.511: 95.9074% ( 4) 00:07:54.665 18.511 - 18.609: 96.0954% ( 13) 00:07:54.665 18.609 - 18.708: 96.3268% ( 16) 00:07:54.665 18.708 - 18.806: 96.5148% ( 13) 00:07:54.665 18.806 - 18.905: 96.6884% ( 12) 00:07:54.665 18.905 - 19.003: 96.9342% ( 17) 00:07:54.665 19.003 - 19.102: 97.1511% ( 15) 00:07:54.665 19.102 - 19.200: 97.3247% ( 12) 00:07:54.665 19.200 - 19.298: 97.3825% ( 4) 00:07:54.665 19.298 - 19.397: 97.5271% ( 10) 00:07:54.665 19.397 - 19.495: 97.6717% ( 10) 00:07:54.665 19.495 - 19.594: 97.8453% ( 12) 00:07:54.665 19.594 - 19.692: 98.0333% ( 13) 00:07:54.665 19.692 - 19.791: 98.1923% ( 11) 00:07:54.665 19.791 - 19.889: 98.2791% ( 6) 00:07:54.665 19.889 - 19.988: 98.3659% ( 6) 00:07:54.665 19.988 - 20.086: 98.4671% ( 7) 00:07:54.665 20.086 - 20.185: 98.5249% ( 4) 00:07:54.665 20.185 - 20.283: 98.5683% ( 3) 00:07:54.665 20.283 - 20.382: 98.5973% ( 2) 00:07:54.665 20.382 - 20.480: 98.6696% ( 5) 00:07:54.665 20.480 - 20.578: 98.7129% ( 3) 00:07:54.665 20.578 - 20.677: 98.7708% ( 4) 00:07:54.665 20.677 - 20.775: 98.8286% ( 4) 00:07:54.665 20.874 - 20.972: 98.8431% ( 1) 00:07:54.665 20.972 - 21.071: 98.8865% ( 3) 00:07:54.665 21.169 - 21.268: 98.9299% ( 3) 00:07:54.665 21.268 - 21.366: 98.9877% ( 4) 00:07:54.665 21.366 - 21.465: 99.0166% ( 2) 00:07:54.665 21.465 - 21.563: 99.0456% ( 2) 00:07:54.665 21.563 - 21.662: 99.0745% ( 2) 00:07:54.665 21.662 - 21.760: 99.1468% ( 5) 00:07:54.665 21.760 - 21.858: 99.1902% ( 3) 00:07:54.665 21.858 - 21.957: 99.3203% ( 9) 00:07:54.665 22.055 - 22.154: 99.3637% ( 3) 00:07:54.665 22.154 - 22.252: 99.3926% ( 2) 00:07:54.665 22.252 - 22.351: 99.4215% ( 2) 00:07:54.665 22.351 - 22.449: 99.4360% ( 1) 00:07:54.665 22.449 - 22.548: 99.4505% ( 1) 00:07:54.665 22.548 - 22.646: 99.4649% ( 1) 00:07:54.665 22.745 - 22.843: 99.5228% ( 4) 00:07:54.665 22.843 - 22.942: 99.5372% ( 1) 00:07:54.665 23.040 - 23.138: 99.5517% ( 1) 00:07:54.665 23.335 - 23.434: 99.5951% ( 3) 00:07:54.665 23.532 - 23.631: 99.6095% ( 1) 00:07:54.665 23.631 - 23.729: 99.6240% ( 1) 00:07:54.665 23.926 - 24.025: 99.6529% ( 2) 00:07:54.665 24.418 - 24.517: 99.6674% ( 1) 00:07:54.665 26.191 - 26.388: 99.6819% ( 1) 00:07:54.665 27.372 - 27.569: 99.6963% ( 1) 00:07:54.665 28.357 - 28.554: 99.7108% ( 1) 00:07:54.665 29.145 - 29.342: 99.7252% ( 1) 00:07:54.665 29.342 - 29.538: 99.7397% ( 1) 00:07:54.665 30.129 - 30.326: 99.7686% ( 2) 00:07:54.665 30.523 - 30.720: 99.7831% ( 1) 00:07:54.665 32.098 - 32.295: 99.7975% ( 1) 00:07:54.665 33.674 - 33.871: 99.8120% ( 1) 00:07:54.665 34.462 - 34.658: 99.8265% ( 1) 00:07:54.665 35.840 - 36.037: 99.8409% ( 1) 00:07:54.665 36.234 - 36.431: 99.8554% ( 1) 00:07:54.665 36.431 - 36.628: 99.8698% ( 1) 00:07:54.665 37.415 - 37.612: 99.8843% ( 1) 00:07:54.665 40.172 - 40.369: 99.8988% ( 1) 00:07:54.665 40.566 - 40.763: 99.9132% ( 1) 00:07:54.665 40.763 - 40.960: 99.9277% ( 1) 00:07:54.665 61.834 - 62.228: 99.9422% 
( 1) 00:07:54.665 62.622 - 63.015: 99.9566% ( 1) 00:07:54.666 63.803 - 64.197: 99.9855% ( 2) 00:07:54.666 87.040 - 87.434: 100.0000% ( 1) 00:07:54.666 00:07:54.666 Complete histogram 00:07:54.666 ================== 00:07:54.666 Range in us Cumulative Count 00:07:54.666 7.286 - 7.335: 0.0145% ( 1) 00:07:54.666 7.335 - 7.385: 0.3760% ( 25) 00:07:54.666 7.385 - 7.434: 1.9812% ( 111) 00:07:54.666 7.434 - 7.483: 5.6544% ( 254) 00:07:54.666 7.483 - 7.532: 10.1663% ( 312) 00:07:54.666 7.532 - 7.582: 14.4179% ( 294) 00:07:54.666 7.582 - 7.631: 17.3825% ( 205) 00:07:54.666 7.631 - 7.680: 19.6240% ( 155) 00:07:54.666 7.680 - 7.729: 21.0991% ( 102) 00:07:54.666 7.729 - 7.778: 22.0969% ( 69) 00:07:54.666 7.778 - 7.828: 22.8778% ( 54) 00:07:54.666 7.828 - 7.877: 24.5987% ( 119) 00:07:54.666 7.877 - 7.926: 29.3565% ( 329) 00:07:54.666 7.926 - 7.975: 37.7296% ( 579) 00:07:54.666 7.975 - 8.025: 48.1562% ( 721) 00:07:54.666 8.025 - 8.074: 59.7397% ( 801) 00:07:54.666 8.074 - 8.123: 68.7491% ( 623) 00:07:54.666 8.123 - 8.172: 76.2545% ( 519) 00:07:54.666 8.172 - 8.222: 82.2849% ( 417) 00:07:54.666 8.222 - 8.271: 87.0571% ( 330) 00:07:54.666 8.271 - 8.320: 90.1229% ( 212) 00:07:54.666 8.320 - 8.369: 92.2777% ( 149) 00:07:54.666 8.369 - 8.418: 93.8684% ( 110) 00:07:54.666 8.418 - 8.468: 94.6782% ( 56) 00:07:54.666 8.468 - 8.517: 95.3145% ( 44) 00:07:54.666 8.517 - 8.566: 95.7918% ( 33) 00:07:54.666 8.566 - 8.615: 96.0954% ( 21) 00:07:54.666 8.615 - 8.665: 96.3124% ( 15) 00:07:54.666 8.665 - 8.714: 96.5004% ( 13) 00:07:54.666 8.714 - 8.763: 96.6016% ( 7) 00:07:54.666 8.763 - 8.812: 96.6305% ( 2) 00:07:54.666 8.812 - 8.862: 96.6594% ( 2) 00:07:54.666 8.862 - 8.911: 96.6884% ( 2) 00:07:54.666 8.911 - 8.960: 96.7028% ( 1) 00:07:54.666 8.960 - 9.009: 96.7607% ( 4) 00:07:54.666 9.009 - 9.058: 96.7896% ( 2) 00:07:54.666 9.058 - 9.108: 96.8040% ( 1) 00:07:54.666 9.108 - 9.157: 96.8185% ( 1) 00:07:54.666 9.157 - 9.206: 96.8330% ( 1) 00:07:54.666 9.206 - 9.255: 96.8474% ( 1) 00:07:54.666 9.255 - 9.305: 96.8619% ( 1) 00:07:54.666 9.305 - 9.354: 96.9053% ( 3) 00:07:54.666 9.403 - 9.452: 96.9197% ( 1) 00:07:54.666 9.452 - 9.502: 96.9342% ( 1) 00:07:54.666 9.502 - 9.551: 96.9487% ( 1) 00:07:54.666 9.600 - 9.649: 96.9776% ( 2) 00:07:54.666 9.649 - 9.698: 97.0065% ( 2) 00:07:54.666 9.698 - 9.748: 97.0210% ( 1) 00:07:54.666 9.748 - 9.797: 97.0354% ( 1) 00:07:54.666 9.797 - 9.846: 97.0788% ( 3) 00:07:54.666 9.846 - 9.895: 97.2523% ( 12) 00:07:54.666 9.895 - 9.945: 97.3247% ( 5) 00:07:54.666 9.945 - 9.994: 97.3825% ( 4) 00:07:54.666 9.994 - 10.043: 97.4259% ( 3) 00:07:54.666 10.043 - 10.092: 97.5127% ( 6) 00:07:54.666 10.142 - 10.191: 97.5271% ( 1) 00:07:54.666 10.191 - 10.240: 97.5560% ( 2) 00:07:54.666 10.240 - 10.289: 97.5705% ( 1) 00:07:54.666 10.289 - 10.338: 97.5850% ( 1) 00:07:54.666 10.634 - 10.683: 97.6139% ( 2) 00:07:54.666 11.028 - 11.077: 97.6283% ( 1) 00:07:54.666 11.077 - 11.126: 97.6428% ( 1) 00:07:54.666 11.225 - 11.274: 97.6573% ( 1) 00:07:54.666 11.471 - 11.520: 97.6717% ( 1) 00:07:54.666 11.618 - 11.668: 97.6862% ( 1) 00:07:54.666 11.914 - 11.963: 97.7007% ( 1) 00:07:54.666 12.111 - 12.160: 97.7296% ( 2) 00:07:54.666 12.258 - 12.308: 97.7440% ( 1) 00:07:54.666 12.308 - 12.357: 97.7585% ( 1) 00:07:54.666 12.406 - 12.455: 97.7874% ( 2) 00:07:54.666 12.505 - 12.554: 97.8163% ( 2) 00:07:54.666 12.603 - 12.702: 97.8308% ( 1) 00:07:54.666 12.800 - 12.898: 97.8453% ( 1) 00:07:54.666 13.292 - 13.391: 97.8597% ( 1) 00:07:54.666 13.391 - 13.489: 97.8742% ( 1) 00:07:54.666 13.489 - 13.588: 97.9320% ( 4) 00:07:54.666 
13.588 - 13.686: 97.9754% ( 3) 00:07:54.666 13.686 - 13.785: 98.0477% ( 5) 00:07:54.666 13.785 - 13.883: 98.1200% ( 5) 00:07:54.666 13.883 - 13.982: 98.1634% ( 3) 00:07:54.666 13.982 - 14.080: 98.2502% ( 6) 00:07:54.666 14.080 - 14.178: 98.3514% ( 7) 00:07:54.666 14.178 - 14.277: 98.4382% ( 6) 00:07:54.666 14.277 - 14.375: 98.5105% ( 5) 00:07:54.666 14.375 - 14.474: 98.6117% ( 7) 00:07:54.666 14.474 - 14.572: 98.6840% ( 5) 00:07:54.666 14.572 - 14.671: 98.7563% ( 5) 00:07:54.666 14.671 - 14.769: 98.8142% ( 4) 00:07:54.666 14.769 - 14.868: 98.8431% ( 2) 00:07:54.666 14.868 - 14.966: 98.8865% ( 3) 00:07:54.666 14.966 - 15.065: 98.9443% ( 4) 00:07:54.666 15.065 - 15.163: 99.0166% ( 5) 00:07:54.666 15.163 - 15.262: 99.1612% ( 10) 00:07:54.666 15.262 - 15.360: 99.2625% ( 7) 00:07:54.666 15.360 - 15.458: 99.3348% ( 5) 00:07:54.666 15.458 - 15.557: 99.4505% ( 8) 00:07:54.666 15.557 - 15.655: 99.5372% ( 6) 00:07:54.666 15.655 - 15.754: 99.5517% ( 1) 00:07:54.666 15.754 - 15.852: 99.5806% ( 2) 00:07:54.666 16.049 - 16.148: 99.5951% ( 1) 00:07:54.666 16.542 - 16.640: 99.6095% ( 1) 00:07:54.666 16.837 - 16.935: 99.6240% ( 1) 00:07:54.666 17.132 - 17.231: 99.6385% ( 1) 00:07:54.666 17.329 - 17.428: 99.6529% ( 1) 00:07:54.666 17.428 - 17.526: 99.6819% ( 2) 00:07:54.666 17.723 - 17.822: 99.6963% ( 1) 00:07:54.666 18.314 - 18.412: 99.7108% ( 1) 00:07:54.666 18.806 - 18.905: 99.7252% ( 1) 00:07:54.666 19.692 - 19.791: 99.7397% ( 1) 00:07:54.666 20.086 - 20.185: 99.7542% ( 1) 00:07:54.666 21.366 - 21.465: 99.7686% ( 1) 00:07:54.666 21.465 - 21.563: 99.7831% ( 1) 00:07:54.666 21.662 - 21.760: 99.7975% ( 1) 00:07:54.666 22.252 - 22.351: 99.8120% ( 1) 00:07:54.666 22.745 - 22.843: 99.8265% ( 1) 00:07:54.666 29.932 - 30.129: 99.8409% ( 1) 00:07:54.666 35.446 - 35.643: 99.8554% ( 1) 00:07:54.666 38.203 - 38.400: 99.8698% ( 1) 00:07:54.666 46.671 - 46.868: 99.8843% ( 1) 00:07:54.666 50.215 - 50.412: 99.8988% ( 1) 00:07:54.666 60.258 - 60.652: 99.9132% ( 1) 00:07:54.666 64.591 - 64.985: 99.9277% ( 1) 00:07:54.666 117.366 - 118.154: 99.9422% ( 1) 00:07:54.666 164.628 - 165.415: 99.9566% ( 1) 00:07:54.666 313.502 - 315.077: 99.9711% ( 1) 00:07:54.666 532.480 - 535.631: 99.9855% ( 1) 00:07:54.666 1367.434 - 1373.735: 100.0000% ( 1) 00:07:54.666 00:07:54.666 00:07:54.666 real 0m1.233s 00:07:54.666 user 0m1.090s 00:07:54.666 sys 0m0.092s 00:07:54.666 16:54:28 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:54.666 16:54:28 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:54.666 ************************************ 00:07:54.666 END TEST nvme_overhead 00:07:54.666 ************************************ 00:07:54.666 16:54:28 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:54.666 16:54:28 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:54.666 16:54:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:54.666 16:54:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:54.666 ************************************ 00:07:54.666 START TEST nvme_arbitration 00:07:54.666 ************************************ 00:07:54.666 16:54:28 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:57.960 Initializing NVMe Controllers 00:07:57.960 Attached to 0000:00:13.0 00:07:57.960 Attached to 0000:00:10.0 00:07:57.960 Attached to 0000:00:11.0 00:07:57.960 Attached to 0000:00:12.0 00:07:57.960 Associating 
QEMU NVMe Ctrl (12343 ) with lcore 0 00:07:57.960 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:07:57.960 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:07:57.960 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:57.960 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:57.960 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:57.960 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:57.960 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:57.960 Initialization complete. Launching workers. 00:07:57.960 Starting thread on core 1 with urgent priority queue 00:07:57.960 Starting thread on core 2 with urgent priority queue 00:07:57.960 Starting thread on core 3 with urgent priority queue 00:07:57.960 Starting thread on core 0 with urgent priority queue 00:07:57.960 QEMU NVMe Ctrl (12343 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:57.960 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:07:57.960 QEMU NVMe Ctrl (12340 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:07:57.960 QEMU NVMe Ctrl (12342 ) core 1: 832.00 IO/s 120.19 secs/100000 ios 00:07:57.960 QEMU NVMe Ctrl (12341 ) core 2: 896.00 IO/s 111.61 secs/100000 ios 00:07:57.960 QEMU NVMe Ctrl (12342 ) core 3: 853.33 IO/s 117.19 secs/100000 ios 00:07:57.960 ======================================================== 00:07:57.960 00:07:57.960 00:07:57.960 real 0m3.304s 00:07:57.960 user 0m9.215s 00:07:57.960 sys 0m0.107s 00:07:57.960 16:54:32 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.960 ************************************ 00:07:57.960 END TEST nvme_arbitration 00:07:57.960 ************************************ 00:07:57.960 16:54:32 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:57.960 16:54:32 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:57.960 16:54:32 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:57.960 16:54:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.960 16:54:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:57.960 ************************************ 00:07:57.960 START TEST nvme_single_aen 00:07:57.960 ************************************ 00:07:57.960 16:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:58.221 Asynchronous Event Request test 00:07:58.221 Attached to 0000:00:13.0 00:07:58.221 Attached to 0000:00:10.0 00:07:58.221 Attached to 0000:00:11.0 00:07:58.221 Attached to 0000:00:12.0 00:07:58.221 Reset controller to setup AER completions for this process 00:07:58.221 Registering asynchronous event callbacks... 
00:07:58.221 Getting orig temperature thresholds of all controllers 00:07:58.221 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.221 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.221 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.221 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:58.221 Setting all controllers temperature threshold low to trigger AER 00:07:58.221 Waiting for all controllers temperature threshold to be set lower 00:07:58.221 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.221 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:58.221 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.221 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:58.221 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.221 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:58.221 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:58.221 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:58.221 Waiting for all controllers to trigger AER and reset threshold 00:07:58.221 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.221 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.221 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.221 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:58.221 Cleaning up... 00:07:58.221 00:07:58.221 real 0m0.251s 00:07:58.221 user 0m0.101s 00:07:58.221 sys 0m0.104s 00:07:58.221 16:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.221 16:54:32 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:58.221 ************************************ 00:07:58.221 END TEST nvme_single_aen 00:07:58.221 ************************************ 00:07:58.221 16:54:32 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:58.221 16:54:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.221 16:54:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.221 16:54:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:58.221 ************************************ 00:07:58.221 START TEST nvme_doorbell_aers 00:07:58.221 ************************************ 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
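The AER test above drives the temperature-threshold feature: thresholds and readings are reported in Kelvin, and the test lowers the threshold below the current 323 K (50 Celsius) composite temperature so the controller fires an asynchronous event. The Kelvin/Celsius pairs in the log are just an integer offset of 273; a trivial sketch of that conversion:

    #include <stdint.h>
    #include <stdio.h>

    /* NVMe reports composite temperature and thresholds in Kelvin;
     * the test output rounds the 0-Celsius point to 273 K. */
    static int kelvin_to_celsius(uint16_t k) { return (int)k - 273; }

    int main(void)
    {
        printf("threshold: %u K = %d C\n", 343u, kelvin_to_celsius(343)); /* 70 C */
        printf("current:   %u K = %d C\n", 323u, kelvin_to_celsius(323)); /* 50 C */
        return 0;
    }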
00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:58.221 16:54:32 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:58.482 [2024-12-05 16:54:32.717141] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:08.479 Executing: test_write_invalid_db 00:08:08.479 Waiting for AER completion... 00:08:08.479 Failure: test_write_invalid_db 00:08:08.479 00:08:08.479 Executing: test_invalid_db_write_overflow_sq 00:08:08.479 Waiting for AER completion... 00:08:08.479 Failure: test_invalid_db_write_overflow_sq 00:08:08.479 00:08:08.479 Executing: test_invalid_db_write_overflow_cq 00:08:08.479 Waiting for AER completion... 00:08:08.479 Failure: test_invalid_db_write_overflow_cq 00:08:08.479 00:08:08.479 16:54:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:08.479 16:54:42 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:08.479 [2024-12-05 16:54:42.731415] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:18.470 Executing: test_write_invalid_db 00:08:18.470 Waiting for AER completion... 00:08:18.470 Failure: test_write_invalid_db 00:08:18.470 00:08:18.470 Executing: test_invalid_db_write_overflow_sq 00:08:18.470 Waiting for AER completion... 00:08:18.470 Failure: test_invalid_db_write_overflow_sq 00:08:18.470 00:08:18.470 Executing: test_invalid_db_write_overflow_cq 00:08:18.470 Waiting for AER completion... 00:08:18.470 Failure: test_invalid_db_write_overflow_cq 00:08:18.470 00:08:18.470 16:54:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:18.470 16:54:52 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:18.470 [2024-12-05 16:54:52.754605] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:28.484 Executing: test_write_invalid_db 00:08:28.484 Waiting for AER completion... 00:08:28.484 Failure: test_write_invalid_db 00:08:28.484 00:08:28.484 Executing: test_invalid_db_write_overflow_sq 00:08:28.484 Waiting for AER completion... 00:08:28.484 Failure: test_invalid_db_write_overflow_sq 00:08:28.484 00:08:28.484 Executing: test_invalid_db_write_overflow_cq 00:08:28.484 Waiting for AER completion... 
00:08:28.484 Failure: test_invalid_db_write_overflow_cq 00:08:28.484 00:08:28.484 16:55:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:28.484 16:55:02 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:28.484 [2024-12-05 16:55:02.787935] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.457 Executing: test_write_invalid_db 00:08:38.457 Waiting for AER completion... 00:08:38.457 Failure: test_write_invalid_db 00:08:38.457 00:08:38.457 Executing: test_invalid_db_write_overflow_sq 00:08:38.457 Waiting for AER completion... 00:08:38.457 Failure: test_invalid_db_write_overflow_sq 00:08:38.457 00:08:38.457 Executing: test_invalid_db_write_overflow_cq 00:08:38.457 Waiting for AER completion... 00:08:38.457 Failure: test_invalid_db_write_overflow_cq 00:08:38.457 00:08:38.457 00:08:38.457 real 0m40.208s 00:08:38.457 user 0m34.024s 00:08:38.457 sys 0m5.772s 00:08:38.457 16:55:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.457 16:55:12 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:38.457 ************************************ 00:08:38.457 END TEST nvme_doorbell_aers 00:08:38.457 ************************************ 00:08:38.457 16:55:12 nvme -- nvme/nvme.sh@97 -- # uname 00:08:38.457 16:55:12 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:38.457 16:55:12 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:38.457 16:55:12 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:38.457 16:55:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.457 16:55:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.457 ************************************ 00:08:38.457 START TEST nvme_multi_aen 00:08:38.457 ************************************ 00:08:38.457 16:55:12 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:38.722 [2024-12-05 16:55:12.827215] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.827274] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.827284] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.828808] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.828848] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.828857] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.829846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. 
Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.829875] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.829882] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.830843] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.830871] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 [2024-12-05 16:55:12.830878] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63137) is not found. Dropping the request. 00:08:38.722 Child process pid: 63658 00:08:38.722 [Child] Asynchronous Event Request test 00:08:38.722 [Child] Attached to 0000:00:13.0 00:08:38.722 [Child] Attached to 0000:00:10.0 00:08:38.722 [Child] Attached to 0000:00:11.0 00:08:38.722 [Child] Attached to 0000:00:12.0 00:08:38.722 [Child] Registering asynchronous event callbacks... 00:08:38.722 [Child] Getting orig temperature thresholds of all controllers 00:08:38.722 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:38.722 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 [Child] Cleaning up... 00:08:38.722 Asynchronous Event Request test 00:08:38.722 Attached to 0000:00:13.0 00:08:38.722 Attached to 0000:00:10.0 00:08:38.722 Attached to 0000:00:11.0 00:08:38.722 Attached to 0000:00:12.0 00:08:38.722 Reset controller to setup AER completions for this process 00:08:38.722 Registering asynchronous event callbacks... 
00:08:38.722 Getting orig temperature thresholds of all controllers 00:08:38.722 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:38.722 Setting all controllers temperature threshold low to trigger AER 00:08:38.722 Waiting for all controllers temperature threshold to be set lower 00:08:38.722 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:38.722 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:38.722 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:38.722 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:38.722 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:38.722 Waiting for all controllers to trigger AER and reset threshold 00:08:38.722 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:38.722 Cleaning up... 00:08:38.980 00:08:38.980 real 0m0.436s 00:08:38.980 user 0m0.154s 00:08:38.980 sys 0m0.177s 00:08:38.980 16:55:13 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.980 16:55:13 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:38.980 ************************************ 00:08:38.980 END TEST nvme_multi_aen 00:08:38.980 ************************************ 00:08:38.980 16:55:13 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.980 16:55:13 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:38.980 16:55:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.980 16:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:38.980 ************************************ 00:08:38.980 START TEST nvme_startup 00:08:38.980 ************************************ 00:08:38.980 16:55:13 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:38.980 Initializing NVMe Controllers 00:08:38.981 Attached to 0000:00:13.0 00:08:38.981 Attached to 0000:00:10.0 00:08:38.981 Attached to 0000:00:11.0 00:08:38.981 Attached to 0000:00:12.0 00:08:38.981 Initialization complete. 00:08:38.981 Time used:130733.102 (us). 
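Both AER tests above print lines of the form "aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01". Those three fields come straight out of Dword 0 of the AER completion: per the NVMe base spec, bits 2:0 carry the event type (0x1 = SMART/health), bits 15:8 the event information (0x01 = temperature threshold), and bits 23:16 the associated log page (0x02 = SMART / Health Information). A small standalone decoder, for illustration:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode Dword 0 of an Asynchronous Event Request completion:
     * bits 2:0 event type, bits 15:8 event info, bits 23:16 log page. */
    static void decode_aen(uint32_t cdw0)
    {
        unsigned type = cdw0 & 0x7;
        unsigned info = (cdw0 >> 8) & 0xff;
        unsigned lid  = (cdw0 >> 16) & 0xff;

        printf("aer_cb for log page %u, aen_event_type: 0x%02x, "
               "aen_event_info: 0x%02x\n", lid, type, info);
    }

    int main(void)
    {
        /* Type 0x1 = SMART/health, info 0x01 = temperature threshold,
         * log page 0x02 = SMART / Health Information. */
        decode_aen(0x00020101);
        return 0;
    }

Run against 0x00020101 this reproduces the log lines above exactly.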
00:08:38.981 ************************************ 00:08:38.981 END TEST nvme_startup 00:08:38.981 ************************************ 00:08:38.981 00:08:38.981 real 0m0.187s 00:08:38.981 user 0m0.062s 00:08:38.981 sys 0m0.086s 00:08:38.981 16:55:13 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.981 16:55:13 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:38.981 16:55:13 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:38.981 16:55:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:38.981 16:55:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.981 16:55:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:39.238 ************************************ 00:08:39.238 START TEST nvme_multi_secondary 00:08:39.238 ************************************ 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63714 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63715 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:39.239 16:55:13 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:42.518 Initializing NVMe Controllers 00:08:42.518 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.518 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:42.518 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:42.518 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:42.518 Initialization complete. Launching workers. 
00:08:42.518 ======================================================== 00:08:42.518 Latency(us) 00:08:42.518 Device Information : IOPS MiB/s Average min max 00:08:42.518 PCIE (0000:00:13.0) NSID 1 from core 1: 7184.23 28.06 2226.73 878.84 6170.70 00:08:42.518 PCIE (0000:00:10.0) NSID 1 from core 1: 7184.23 28.06 2225.85 852.79 6376.38 00:08:42.518 PCIE (0000:00:11.0) NSID 1 from core 1: 7184.23 28.06 2226.89 897.84 6326.36 00:08:42.518 PCIE (0000:00:12.0) NSID 1 from core 1: 7184.23 28.06 2226.96 884.04 6378.12 00:08:42.518 PCIE (0000:00:12.0) NSID 2 from core 1: 7184.23 28.06 2227.14 877.25 6511.21 00:08:42.518 PCIE (0000:00:12.0) NSID 3 from core 1: 7184.23 28.06 2227.29 870.58 6771.79 00:08:42.518 ======================================================== 00:08:42.518 Total : 43105.40 168.38 2226.81 852.79 6771.79 00:08:42.518 00:08:42.518 Initializing NVMe Controllers 00:08:42.518 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:42.518 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:42.518 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:42.518 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:42.518 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:42.518 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:42.518 Initialization complete. Launching workers. 00:08:42.518 ======================================================== 00:08:42.518 Latency(us) 00:08:42.518 Device Information : IOPS MiB/s Average min max 00:08:42.518 PCIE (0000:00:13.0) NSID 1 from core 2: 3156.02 12.33 5069.27 1388.33 12860.56 00:08:42.518 PCIE (0000:00:10.0) NSID 1 from core 2: 3156.02 12.33 5067.24 1075.65 12889.68 00:08:42.518 PCIE (0000:00:11.0) NSID 1 from core 2: 3156.02 12.33 5069.24 1209.48 12763.64 00:08:42.518 PCIE (0000:00:12.0) NSID 1 from core 2: 3156.02 12.33 5068.75 1145.67 12605.49 00:08:42.518 PCIE (0000:00:12.0) NSID 2 from core 2: 3156.02 12.33 5069.09 1083.51 12575.74 00:08:42.518 PCIE (0000:00:12.0) NSID 3 from core 2: 3156.02 12.33 5068.99 1063.12 12398.67 00:08:42.518 ======================================================== 00:08:42.518 Total : 18936.12 73.97 5068.76 1063.12 12889.68 00:08:42.518 00:08:42.518 16:55:16 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63714 00:08:44.421 Initializing NVMe Controllers 00:08:44.421 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:44.421 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:44.421 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:44.421 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:44.421 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:44.421 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:44.421 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:44.421 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:44.421 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:44.421 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:44.421 Initialization complete. Launching workers. 
00:08:44.421 ======================================================== 00:08:44.421 Latency(us) 00:08:44.421 Device Information : IOPS MiB/s Average min max 00:08:44.421 PCIE (0000:00:13.0) NSID 1 from core 0: 10638.40 41.56 1503.62 692.09 6066.58 00:08:44.421 PCIE (0000:00:10.0) NSID 1 from core 0: 10638.40 41.56 1502.78 678.01 5864.48 00:08:44.421 PCIE (0000:00:11.0) NSID 1 from core 0: 10638.40 41.56 1503.60 690.22 5895.73 00:08:44.421 PCIE (0000:00:12.0) NSID 1 from core 0: 10638.40 41.56 1503.58 670.02 5741.82 00:08:44.421 PCIE (0000:00:12.0) NSID 2 from core 0: 10638.40 41.56 1503.57 644.20 5924.22 00:08:44.421 PCIE (0000:00:12.0) NSID 3 from core 0: 10638.40 41.56 1503.56 609.42 5868.24 00:08:44.421 ======================================================== 00:08:44.421 Total : 63830.41 249.34 1503.45 609.42 6066.58 00:08:44.421 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63715 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63784 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63785 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:44.421 16:55:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:47.705 Initializing NVMe Controllers 00:08:47.705 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.705 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:47.705 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:47.705 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:47.705 Initialization complete. Launching workers. 
00:08:47.705 ======================================================== 00:08:47.705 Latency(us) 00:08:47.705 Device Information : IOPS MiB/s Average min max 00:08:47.705 PCIE (0000:00:13.0) NSID 1 from core 0: 7711.00 30.12 2074.54 758.21 12591.77 00:08:47.705 PCIE (0000:00:10.0) NSID 1 from core 0: 7711.00 30.12 2073.62 744.77 13276.82 00:08:47.705 PCIE (0000:00:11.0) NSID 1 from core 0: 7711.00 30.12 2074.54 759.88 13390.73 00:08:47.705 PCIE (0000:00:12.0) NSID 1 from core 0: 7711.00 30.12 2074.59 754.72 11763.18 00:08:47.705 PCIE (0000:00:12.0) NSID 2 from core 0: 7711.00 30.12 2074.82 760.82 12340.35 00:08:47.705 PCIE (0000:00:12.0) NSID 3 from core 0: 7711.00 30.12 2074.97 766.08 12275.05 00:08:47.705 ======================================================== 00:08:47.705 Total : 46265.97 180.73 2074.51 744.77 13390.73 00:08:47.705 00:08:47.705 Initializing NVMe Controllers 00:08:47.705 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.705 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.705 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:47.705 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:47.705 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:47.705 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:47.705 Initialization complete. Launching workers. 00:08:47.705 ======================================================== 00:08:47.705 Latency(us) 00:08:47.705 Device Information : IOPS MiB/s Average min max 00:08:47.705 PCIE (0000:00:13.0) NSID 1 from core 1: 7785.38 30.41 2054.76 762.74 6123.49 00:08:47.705 PCIE (0000:00:10.0) NSID 1 from core 1: 7785.38 30.41 2053.89 737.96 5975.47 00:08:47.705 PCIE (0000:00:11.0) NSID 1 from core 1: 7785.38 30.41 2054.88 771.32 6202.95 00:08:47.705 PCIE (0000:00:12.0) NSID 1 from core 1: 7785.38 30.41 2054.93 752.95 5810.84 00:08:47.705 PCIE (0000:00:12.0) NSID 2 from core 1: 7785.38 30.41 2054.98 768.64 5205.25 00:08:47.705 PCIE (0000:00:12.0) NSID 3 from core 1: 7785.38 30.41 2055.04 762.26 5287.73 00:08:47.705 ======================================================== 00:08:47.705 Total : 46712.26 182.47 2054.75 737.96 6202.95 00:08:47.705 00:08:49.675 Initializing NVMe Controllers 00:08:49.675 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.675 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.675 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.675 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.675 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:49.675 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:49.675 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:49.675 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:49.675 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:49.675 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:49.675 Initialization complete. Launching workers. 
00:08:49.675 ======================================================== 00:08:49.675 Latency(us) 00:08:49.675 Device Information : IOPS MiB/s Average min max 00:08:49.675 PCIE (0000:00:13.0) NSID 1 from core 2: 4514.29 17.63 3543.77 800.80 12604.95 00:08:49.675 PCIE (0000:00:10.0) NSID 1 from core 2: 4514.29 17.63 3543.11 770.13 12145.79 00:08:49.675 PCIE (0000:00:11.0) NSID 1 from core 2: 4514.29 17.63 3543.84 786.15 12198.57 00:08:49.675 PCIE (0000:00:12.0) NSID 1 from core 2: 4514.29 17.63 3543.61 774.31 12840.09 00:08:49.675 PCIE (0000:00:12.0) NSID 2 from core 2: 4514.29 17.63 3543.74 713.30 13414.53 00:08:49.675 PCIE (0000:00:12.0) NSID 3 from core 2: 4514.29 17.63 3543.51 640.21 12814.09 00:08:49.675 ======================================================== 00:08:49.675 Total : 27085.76 105.80 3543.60 640.21 13414.53 00:08:49.675 00:08:49.675 ************************************ 00:08:49.675 END TEST nvme_multi_secondary 00:08:49.675 ************************************ 00:08:49.675 16:55:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63784 00:08:49.675 16:55:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63785 00:08:49.675 00:08:49.675 real 0m10.590s 00:08:49.675 user 0m18.410s 00:08:49.675 sys 0m0.615s 00:08:49.675 16:55:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:49.675 16:55:23 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:49.675 16:55:23 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:49.675 16:55:23 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:49.675 16:55:23 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/62729 ]] 00:08:49.675 16:55:23 nvme -- common/autotest_common.sh@1094 -- # kill 62729 00:08:49.675 16:55:23 nvme -- common/autotest_common.sh@1095 -- # wait 62729 00:08:49.675 [2024-12-05 16:55:23.978315] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.978383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.978408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.978425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.980765] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.980912] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.981046] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.675 [2024-12-05 16:55:23.981142] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.982838] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 
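A note on the performance tables above: the MiB/s column is derived from IOPS at the 4096-byte IO size these runs use (the -o 4096 flag in the spdk_nvme_perf invocations), and the arbitration output's "secs/100000 ios" is simply the reciprocal rate. Both reductions, checked against two rows visible above:

    #include <stdio.h>

    int main(void)
    {
        double iops = 7184.23;      /* a per-namespace row from the core-1 table */
        printf("%.2f MiB/s\n", iops * 4096.0 / (1024.0 * 1024.0)); /* ~28.06 */

        double arb_iops = 810.67;   /* arbitration test, core 0 */
        printf("%.2f secs/100000 ios\n", 100000.0 / arb_iops);     /* ~123.36 */
        return 0;
    }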
00:08:49.676 [2024-12-05 16:55:23.982986] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.983094] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.983137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.984911] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.984959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.984972] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.676 [2024-12-05 16:55:23.984985] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63657) is not found. Dropping the request. 00:08:49.935 16:55:24 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:49.935 16:55:24 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:49.935 16:55:24 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:49.935 16:55:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:49.935 16:55:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:49.935 16:55:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:49.935 ************************************ 00:08:49.935 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:49.935 ************************************ 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:49.935 * Looking for test storage... 
00:08:49.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:49.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.935 --rc genhtml_branch_coverage=1 00:08:49.935 --rc genhtml_function_coverage=1 00:08:49.935 --rc genhtml_legend=1 00:08:49.935 --rc geninfo_all_blocks=1 00:08:49.935 --rc geninfo_unexecuted_blocks=1 00:08:49.935 00:08:49.935 ' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:49.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.935 --rc genhtml_branch_coverage=1 00:08:49.935 --rc genhtml_function_coverage=1 00:08:49.935 --rc genhtml_legend=1 00:08:49.935 --rc geninfo_all_blocks=1 00:08:49.935 --rc geninfo_unexecuted_blocks=1 00:08:49.935 00:08:49.935 ' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:49.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.935 --rc genhtml_branch_coverage=1 00:08:49.935 --rc genhtml_function_coverage=1 00:08:49.935 --rc genhtml_legend=1 00:08:49.935 --rc geninfo_all_blocks=1 00:08:49.935 --rc geninfo_unexecuted_blocks=1 00:08:49.935 00:08:49.935 ' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:49.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.935 --rc genhtml_branch_coverage=1 00:08:49.935 --rc genhtml_function_coverage=1 00:08:49.935 --rc genhtml_legend=1 00:08:49.935 --rc geninfo_all_blocks=1 00:08:49.935 --rc geninfo_unexecuted_blocks=1 00:08:49.935 00:08:49.935 ' 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:49.935 
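The cmp_versions walk traced above reduces to the minimal sketch below. The name cmp_lt and the equal-case handling are illustrative; the real helper lives in scripts/common.sh and also accepts ge/le/gt operators.

    # Compare dotted version strings component by component, as the
    # trace above does for `lt 1.15 2` when probing the lcov version.
    cmp_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # greater, so not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # strictly less-than
        done
        return 1  # equal, so not less-than
    }
    cmp_lt 1.15 2 && echo "old lcov: pass the --rc branch/function coverage flags"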
16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:49.935 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63941 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63941 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 63941 ']' 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
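The bdf discovery traced above (get_first_nvme_bdf via get_nvme_bdfs) boils down to this sketch; the paths come from the trace, the empty-set guard is illustrative.

    # Enumerate NVMe PCI addresses: gen_nvme.sh emits a bdev config,
    # jq pulls out each controller's traddr, and the first entry wins.
    bdfs=($("/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
    bdf=${bdfs[0]}  # 0000:00:10.0 on this rig; 11.0, 12.0 and 13.0 follow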
00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:50.195 16:55:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:50.195 [2024-12-05 16:55:24.402855] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:08:50.195 [2024-12-05 16:55:24.402988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63941 ] 00:08:50.453 [2024-12-05 16:55:24.565715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.453 [2024-12-05 16:55:24.665897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.453 [2024-12-05 16:55:24.666191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.453 [2024-12-05 16:55:24.666451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.453 [2024-12-05 16:55:24.666553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 nvme0n1 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ZlOR5.txt 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:51.023 true 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733417725 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63964 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:51.023 16:55:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.557 [2024-12-05 16:55:27.349723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:53.557 [2024-12-05 16:55:27.350198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:53.557 [2024-12-05 16:55:27.350230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:53.557 [2024-12-05 16:55:27.350241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:53.557 [2024-12-05 16:55:27.351649] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.557 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63964 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63964 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63964 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:53.557 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ZlOR5.txt 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ZlOR5.txt 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63941 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 63941 ']' 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 63941 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63941 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:53.558 killing process with pid 63941 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63941' 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 63941 00:08:53.558 16:55:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 63941 00:08:54.492 16:55:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:54.492 16:55:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:54.492 00:08:54.492 real 0m4.505s 00:08:54.492 user 0m15.969s 00:08:54.492 sys 0m0.513s 00:08:54.492 16:55:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:08:54.492 16:55:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:54.492 ************************************ 00:08:54.492 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:54.492 ************************************ 00:08:54.492 16:55:28 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:54.492 16:55:28 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:54.492 16:55:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:54.492 16:55:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.492 16:55:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.492 ************************************ 00:08:54.492 START TEST nvme_fio 00:08:54.492 ************************************ 00:08:54.492 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:54.492 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:54.492 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:54.492 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:54.492 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:54.493 16:55:28 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:54.493 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:54.493 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:54.493 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:54.493 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:54.493 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.749 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:54.749 16:55:28 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:55.006 16:55:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:55.006 16:55:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.006 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:55.007 16:55:29 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:55.007 16:55:29 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:55.265 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:55.265 fio-3.35 00:08:55.265 Starting 1 thread 00:09:00.529 00:09:00.529 test: (groupid=0, jobs=1): err= 0: pid=64107: Thu Dec 5 16:55:34 2024 00:09:00.529 read: IOPS=17.8k, BW=69.4MiB/s (72.8MB/s)(141MiB/2031msec) 00:09:00.529 slat (nsec): min=3278, max=94621, avg=5093.91, stdev=2345.76 00:09:00.529 clat (usec): min=787, max=43478, avg=3077.76, stdev=1566.44 00:09:00.529 lat (usec): min=791, max=43483, avg=3082.85, stdev=1566.91 00:09:00.529 clat percentiles (usec): 00:09:00.529 | 1.00th=[ 1254], 5.00th=[ 1713], 10.00th=[ 2180], 20.00th=[ 2409], 00:09:00.529 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:09:00.529 | 70.00th=[ 3064], 80.00th=[ 3556], 90.00th=[ 4686], 95.00th=[ 5538], 00:09:00.529 | 99.00th=[ 6849], 99.50th=[ 7504], 99.90th=[13173], 99.95th=[39584], 00:09:00.529 | 99.99th=[42206] 00:09:00.529 bw ( KiB/s): min=48864, max=86520, per=100.00%, avg=72132.00, stdev=16326.17, samples=4 00:09:00.529 iops : min=12216, max=21630, avg=18033.00, stdev=4081.54, samples=4 00:09:00.529 write: IOPS=17.8k, BW=69.5MiB/s (72.9MB/s)(141MiB/2031msec); 0 zone resets 00:09:00.529 slat (nsec): min=3449, max=75309, avg=5335.23, stdev=2408.96 00:09:00.529 clat (usec): min=758, max=74214, avg=4094.50, stdev=5397.07 00:09:00.529 lat (usec): min=763, max=74219, avg=4099.83, stdev=5397.25 00:09:00.529 clat percentiles (usec): 00:09:00.529 | 1.00th=[ 1385], 5.00th=[ 1975], 10.00th=[ 2278], 20.00th=[ 2442], 00:09:00.529 | 30.00th=[ 2540], 40.00th=[ 2638], 50.00th=[ 2769], 60.00th=[ 2900], 00:09:00.529 | 70.00th=[ 3195], 80.00th=[ 3851], 90.00th=[ 5276], 95.00th=[ 6718], 00:09:00.529 | 99.00th=[32375], 99.50th=[34866], 99.90th=[56886], 99.95th=[65274], 00:09:00.529 | 99.99th=[71828] 00:09:00.529 bw ( KiB/s): min=49024, max=86920, per=100.00%, avg=72110.00, stdev=16327.23, samples=4 00:09:00.529 iops : min=12256, max=21730, avg=18027.50, stdev=4081.81, samples=4 00:09:00.529 lat (usec) : 1000=0.04% 00:09:00.529 lat (msec) : 2=6.31%, 4=77.06%, 10=14.52%, 20=0.34%, 50=1.67% 00:09:00.529 lat (msec) : 100=0.07% 00:09:00.529 cpu : 
usr=99.11%, sys=0.10%, ctx=9, majf=0, minf=607 00:09:00.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:00.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:00.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:00.529 issued rwts: total=36101,36147,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:00.529 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:00.529 00:09:00.529 Run status group 0 (all jobs): 00:09:00.529 READ: bw=69.4MiB/s (72.8MB/s), 69.4MiB/s-69.4MiB/s (72.8MB/s-72.8MB/s), io=141MiB (148MB), run=2031-2031msec 00:09:00.529 WRITE: bw=69.5MiB/s (72.9MB/s), 69.5MiB/s-69.5MiB/s (72.9MB/s-72.9MB/s), io=141MiB (148MB), run=2031-2031msec 00:09:00.529 ----------------------------------------------------- 00:09:00.529 Suppressions used: 00:09:00.529 count bytes template 00:09:00.529 1 32 /usr/src/fio/parse.c 00:09:00.529 1 8 libtcmalloc_minimal.so 00:09:00.529 ----------------------------------------------------- 00:09:00.529 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:00.529 16:55:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:00.529 
16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:00.529 16:55:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:00.786 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:00.786 fio-3.35 00:09:00.786 Starting 1 thread 00:09:07.345 00:09:07.345 test: (groupid=0, jobs=1): err= 0: pid=64163: Thu Dec 5 16:55:41 2024 00:09:07.345 read: IOPS=20.4k, BW=79.5MiB/s (83.4MB/s)(159MiB/2001msec) 00:09:07.345 slat (nsec): min=3406, max=72362, avg=5293.85, stdev=2585.82 00:09:07.345 clat (usec): min=288, max=8683, avg=3120.72, stdev=937.81 00:09:07.345 lat (usec): min=293, max=8725, avg=3126.02, stdev=939.04 00:09:07.345 clat percentiles (usec): 00:09:07.345 | 1.00th=[ 1844], 5.00th=[ 2311], 10.00th=[ 2409], 20.00th=[ 2540], 00:09:07.345 | 30.00th=[ 2606], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2900], 00:09:07.345 | 70.00th=[ 3130], 80.00th=[ 3523], 90.00th=[ 4555], 95.00th=[ 5276], 00:09:07.345 | 99.00th=[ 6456], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7767], 00:09:07.345 | 99.99th=[ 8586] 00:09:07.346 bw ( KiB/s): min=81992, max=84504, per=100.00%, avg=83096.00, stdev=1283.30, samples=3 00:09:07.346 iops : min=20498, max=21126, avg=20774.00, stdev=320.82, samples=3 00:09:07.346 write: IOPS=20.3k, BW=79.3MiB/s (83.2MB/s)(159MiB/2001msec); 0 zone resets 00:09:07.346 slat (nsec): min=3527, max=73909, avg=5515.84, stdev=2685.40 00:09:07.346 clat (usec): min=200, max=8618, avg=3151.16, stdev=952.11 00:09:07.346 lat (usec): min=205, max=8631, avg=3156.68, stdev=953.39 00:09:07.346 clat percentiles (usec): 00:09:07.346 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2442], 20.00th=[ 2540], 00:09:07.346 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2933], 00:09:07.346 | 70.00th=[ 3130], 80.00th=[ 3556], 90.00th=[ 4555], 95.00th=[ 5342], 00:09:07.346 | 99.00th=[ 6521], 99.50th=[ 6849], 99.90th=[ 7373], 99.95th=[ 7767], 00:09:07.346 | 99.99th=[ 8455] 00:09:07.346 bw ( KiB/s): min=82128, max=84440, per=100.00%, avg=83144.00, stdev=1181.16, samples=3 00:09:07.346 iops : min=20532, max=21110, avg=20786.00, stdev=295.29, samples=3 00:09:07.346 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:07.346 lat (msec) : 2=1.36%, 4=83.80%, 10=14.80% 00:09:07.346 cpu : usr=99.05%, sys=0.05%, ctx=5, majf=0, minf=606 00:09:07.346 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:07.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.346 issued rwts: total=40729,40626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.346 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.346 00:09:07.346 Run status group 0 (all jobs): 00:09:07.346 READ: bw=79.5MiB/s (83.4MB/s), 79.5MiB/s-79.5MiB/s (83.4MB/s-83.4MB/s), io=159MiB (167MB), run=2001-2001msec 00:09:07.346 WRITE: bw=79.3MiB/s (83.2MB/s), 79.3MiB/s-79.3MiB/s (83.2MB/s-83.2MB/s), io=159MiB (166MB), run=2001-2001msec 00:09:07.346 ----------------------------------------------------- 00:09:07.346 Suppressions used: 00:09:07.346 count bytes template 00:09:07.346 1 32 /usr/src/fio/parse.c 00:09:07.346 1 8 libtcmalloc_minimal.so 00:09:07.346 ----------------------------------------------------- 
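Each per-controller fio run above repeats the same sanitizer-preload dance before launching fio; a minimal sketch follows, with the paths taken from the trace and the standalone shape illustrative.

    # The SPDK fio ioengine plugin links against ASan, but fio itself does
    # not, so the sanitizer runtime must be preloaded ahead of the plugin.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=(libasan libclang_rt.asan)
    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd's third column is the resolved library path.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096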
00:09:07.346 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:07.346 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:07.605 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:07.605 16:55:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:07.605 16:55:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:07.864 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:07.864 fio-3.35 00:09:07.864 Starting 1 thread 00:09:14.422 00:09:14.422 test: (groupid=0, jobs=1): err= 0: pid=64224: Thu Dec 5 16:55:48 2024 00:09:14.422 read: IOPS=19.8k, BW=77.5MiB/s (81.3MB/s)(155MiB/2001msec) 00:09:14.422 slat (nsec): min=4204, max=58930, avg=5428.26, stdev=2746.80 00:09:14.422 clat (usec): min=217, max=10722, avg=3208.06, stdev=1045.44 00:09:14.422 lat (usec): min=221, max=10734, avg=3213.48, stdev=1046.78 00:09:14.422 clat percentiles (usec): 00:09:14.422 | 1.00th=[ 1844], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 
00:09:14.422 | 30.00th=[ 2638], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2933], 00:09:14.422 | 70.00th=[ 3195], 80.00th=[ 3785], 90.00th=[ 4752], 95.00th=[ 5538], 00:09:14.422 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[ 9765], 00:09:14.422 | 99.99th=[10421] 00:09:14.422 bw ( KiB/s): min=73520, max=86064, per=100.00%, avg=80053.33, stdev=6288.31, samples=3 00:09:14.422 iops : min=18380, max=21516, avg=20013.33, stdev=1572.08, samples=3 00:09:14.422 write: IOPS=19.8k, BW=77.3MiB/s (81.0MB/s)(155MiB/2001msec); 0 zone resets 00:09:14.422 slat (nsec): min=4270, max=90755, avg=5609.13, stdev=2835.62 00:09:14.422 clat (usec): min=233, max=10760, avg=3229.75, stdev=1049.69 00:09:14.422 lat (usec): min=237, max=10772, avg=3235.35, stdev=1051.00 00:09:14.422 clat percentiles (usec): 00:09:14.422 | 1.00th=[ 1844], 5.00th=[ 2376], 10.00th=[ 2474], 20.00th=[ 2573], 00:09:14.422 | 30.00th=[ 2638], 40.00th=[ 2737], 50.00th=[ 2802], 60.00th=[ 2933], 00:09:14.422 | 70.00th=[ 3228], 80.00th=[ 3818], 90.00th=[ 4817], 95.00th=[ 5538], 00:09:14.422 | 99.00th=[ 6718], 99.50th=[ 7308], 99.90th=[ 9241], 99.95th=[10290], 00:09:14.422 | 99.99th=[10552] 00:09:14.422 bw ( KiB/s): min=73496, max=86080, per=100.00%, avg=80090.67, stdev=6313.80, samples=3 00:09:14.422 iops : min=18374, max=21520, avg=20022.67, stdev=1578.45, samples=3 00:09:14.422 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:14.422 lat (msec) : 2=1.53%, 4=80.55%, 10=17.82%, 20=0.05% 00:09:14.422 cpu : usr=99.05%, sys=0.10%, ctx=6, majf=0, minf=606 00:09:14.422 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:14.422 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.422 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.422 issued rwts: total=39701,39587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.422 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.422 00:09:14.422 Run status group 0 (all jobs): 00:09:14.423 READ: bw=77.5MiB/s (81.3MB/s), 77.5MiB/s-77.5MiB/s (81.3MB/s-81.3MB/s), io=155MiB (163MB), run=2001-2001msec 00:09:14.423 WRITE: bw=77.3MiB/s (81.0MB/s), 77.3MiB/s-77.3MiB/s (81.0MB/s-81.0MB/s), io=155MiB (162MB), run=2001-2001msec 00:09:14.423 ----------------------------------------------------- 00:09:14.423 Suppressions used: 00:09:14.423 count bytes template 00:09:14.423 1 32 /usr/src/fio/parse.c 00:09:14.423 1 8 libtcmalloc_minimal.so 00:09:14.423 ----------------------------------------------------- 00:09:14.423 00:09:14.423 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:14.423 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:14.423 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:14.423 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:14.681 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:14.681 16:55:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:14.939 16:55:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:14.939 16:55:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:14.939 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:14.940 16:55:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:14.940 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:14.940 fio-3.35 00:09:14.940 Starting 1 thread 00:09:27.212 00:09:27.212 test: (groupid=0, jobs=1): err= 0: pid=64279: Thu Dec 5 16:55:59 2024 00:09:27.212 read: IOPS=21.3k, BW=83.2MiB/s (87.3MB/s)(167MiB/2001msec) 00:09:27.212 slat (nsec): min=3420, max=78712, avg=5184.37, stdev=2676.82 00:09:27.212 clat (usec): min=278, max=11330, avg=2999.47, stdev=982.88 00:09:27.212 lat (usec): min=282, max=11335, avg=3004.65, stdev=984.37 00:09:27.212 clat percentiles (usec): 00:09:27.212 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2507], 00:09:27.212 | 30.00th=[ 2540], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:27.212 | 70.00th=[ 2868], 80.00th=[ 3097], 90.00th=[ 4113], 95.00th=[ 5342], 00:09:27.212 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 8979], 99.95th=[ 9372], 00:09:27.212 | 99.99th=[10945] 00:09:27.212 bw ( KiB/s): min=81056, max=88856, per=98.50%, avg=83949.33, stdev=4272.02, samples=3 00:09:27.212 iops : min=20264, max=22214, avg=20987.33, stdev=1068.00, samples=3 00:09:27.212 write: IOPS=21.2k, BW=82.6MiB/s (86.7MB/s)(165MiB/2001msec); 0 zone resets 00:09:27.212 slat (nsec): min=3492, max=70969, avg=5343.76, stdev=2530.83 00:09:27.212 clat (usec): min=230, max=11405, avg=3004.48, stdev=964.60 00:09:27.212 lat (usec): min=234, max=11419, avg=3009.83, stdev=966.01 00:09:27.212 clat percentiles (usec): 00:09:27.212 | 1.00th=[ 2024], 5.00th=[ 2376], 10.00th=[ 2442], 20.00th=[ 2507], 00:09:27.212 | 30.00th=[ 2573], 40.00th=[ 2606], 50.00th=[ 2671], 60.00th=[ 2769], 00:09:27.212 | 70.00th=[ 2900], 80.00th=[ 
3130], 90.00th=[ 4080], 95.00th=[ 5342], 00:09:27.212 | 99.00th=[ 6915], 99.50th=[ 7242], 99.90th=[ 8717], 99.95th=[ 9110], 00:09:27.212 | 99.99th=[10552] 00:09:27.212 bw ( KiB/s): min=81576, max=88904, per=99.31%, avg=84034.67, stdev=4217.03, samples=3 00:09:27.212 iops : min=20394, max=22226, avg=21008.67, stdev=1054.26, samples=3 00:09:27.212 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:09:27.212 lat (msec) : 2=0.89%, 4=88.45%, 10=10.59%, 20=0.02% 00:09:27.212 cpu : usr=99.15%, sys=0.10%, ctx=5, majf=0, minf=604 00:09:27.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:27.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:27.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:27.212 issued rwts: total=42633,42331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:27.212 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:27.212 00:09:27.212 Run status group 0 (all jobs): 00:09:27.212 READ: bw=83.2MiB/s (87.3MB/s), 83.2MiB/s-83.2MiB/s (87.3MB/s-87.3MB/s), io=167MiB (175MB), run=2001-2001msec 00:09:27.212 WRITE: bw=82.6MiB/s (86.7MB/s), 82.6MiB/s-82.6MiB/s (86.7MB/s-86.7MB/s), io=165MiB (173MB), run=2001-2001msec 00:09:27.212 ----------------------------------------------------- 00:09:27.212 Suppressions used: 00:09:27.212 count bytes template 00:09:27.212 1 32 /usr/src/fio/parse.c 00:09:27.212 1 8 libtcmalloc_minimal.so 00:09:27.212 ----------------------------------------------------- 00:09:27.212 00:09:27.212 16:56:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:27.212 16:56:00 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:27.212 00:09:27.212 real 0m31.362s 00:09:27.212 user 0m17.426s 00:09:27.212 sys 0m25.765s 00:09:27.212 16:56:00 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.212 ************************************ 00:09:27.212 END TEST nvme_fio 00:09:27.212 16:56:00 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 ************************************ 00:09:27.212 00:09:27.212 real 1m41.583s 00:09:27.212 user 3m38.759s 00:09:27.212 sys 0m36.780s 00:09:27.212 16:56:00 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.212 16:56:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 ************************************ 00:09:27.212 END TEST nvme 00:09:27.212 ************************************ 00:09:27.212 16:56:00 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:27.212 16:56:00 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:27.212 16:56:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:27.212 16:56:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.212 16:56:00 -- common/autotest_common.sh@10 -- # set +x 00:09:27.212 ************************************ 00:09:27.212 START TEST nvme_scc 00:09:27.212 ************************************ 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:27.212 * Looking for test storage... 
00:09:27.212 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.212 --rc genhtml_branch_coverage=1 00:09:27.212 --rc genhtml_function_coverage=1 00:09:27.212 --rc genhtml_legend=1 00:09:27.212 --rc geninfo_all_blocks=1 00:09:27.212 --rc geninfo_unexecuted_blocks=1 00:09:27.212 00:09:27.212 ' 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.212 --rc genhtml_branch_coverage=1 00:09:27.212 --rc genhtml_function_coverage=1 00:09:27.212 --rc genhtml_legend=1 00:09:27.212 --rc geninfo_all_blocks=1 00:09:27.212 --rc geninfo_unexecuted_blocks=1 00:09:27.212 00:09:27.212 ' 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:27.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.212 --rc genhtml_branch_coverage=1 00:09:27.212 --rc genhtml_function_coverage=1 00:09:27.212 --rc genhtml_legend=1 00:09:27.212 --rc geninfo_all_blocks=1 00:09:27.212 --rc geninfo_unexecuted_blocks=1 00:09:27.212 00:09:27.212 ' 00:09:27.212 16:56:00 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.212 --rc genhtml_branch_coverage=1 00:09:27.212 --rc genhtml_function_coverage=1 00:09:27.212 --rc genhtml_legend=1 00:09:27.212 --rc geninfo_all_blocks=1 00:09:27.212 --rc geninfo_unexecuted_blocks=1 00:09:27.212 00:09:27.212 ' 00:09:27.212 16:56:00 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:27.212 16:56:00 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:27.212 16:56:00 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:27.212 16:56:00 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:27.212 16:56:00 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:27.212 16:56:00 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:27.212 16:56:00 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.212 16:56:00 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.213 16:56:00 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:27.213 16:56:00 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:27.213 16:56:00 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
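The controller scan that follows (scan_nvme_ctrls, once setup.sh has rebound the devices to the kernel nvme driver) populates one associative array per controller, field by field; a condensed sketch of that parse, with the trimming simplified relative to the real nvme_get helper:

    # Split each "field : value" line of `nvme id-ctrl` on the first ':'
    # and stash it under the controller's name, e.g. nvme0[vid]=0x1b36.
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue
        reg=${reg//[[:space:]]/}   # "vid   " -> "vid"
        nvme0[$reg]=${val# }       # drop the space right after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[sn]}"            # "12341 " for this QEMU controller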
00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:27.213 16:56:00 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:27.213 16:56:00 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:27.213 16:56:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:27.213 16:56:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:27.213 16:56:00 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:27.213 16:56:00 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:27.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:27.213 Waiting for block devices as requested 00:09:27.213 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.213 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.213 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:27.213 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:32.565 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:32.565 16:56:06 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:32.565 16:56:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.565 16:56:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:32.565 16:56:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.565 16:56:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.565 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
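
[Annotation] Much of the controller state in this dump is packed into bit fields: the oacs=0x12a parsed just above advertises the optional admin commands, and the oncs value parsed a little further down (0x15d) does the same for optional NVM commands — including the Copy command (ONCS bit 8) that the nvme_scc suite exercises. A hedged decode of the oacs value, with bit positions taken from the NVMe base specification (the helper name is illustrative, not from functions.sh):

    oacs=0x12a
    bit() { (( oacs & (1 << $1) )); }       # test a single OACS bit
    bit 1 && echo "Format NVM"              # 0x12a has bits 1, 3, 5, 8 set
    bit 3 && echo "Namespace Management"
    bit 5 && echo "Directives"
    bit 8 && echo "Doorbell Buffer Config"
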
00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:32.566 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:32.567 16:56:06 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.567 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.568 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.568 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:32.569 
16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
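
[Annotation] At this point the loop has moved on from the controller to its first namespace (ng0n1), whose nsze, ncap and nuse all read 0x140000 blocks. The byte size follows from nsze and the in-use LBA format: the low nibble of flbas (0x4) selects format 4, and the lbaf4 entry parsed just below reports lbads:12, i.e. 2^12 = 4096-byte blocks. Worked out in shell (variable names are mine, values from the dump):

    nsze=0x140000   # namespace size in logical blocks
    lbads=12        # lbaf4: "ms:0 lbads:12 rp:0 (in use)"
    echo $(( nsze * (1 << lbads) ))              # 5368709120 bytes
    echo $(( (nsze * (1 << lbads)) >> 30 ))GiB   # 5GiB
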
00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:32.569 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.569 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:32.570 16:56:06 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.570 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:32.571 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.571 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:32.572 16:56:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.572 16:56:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:32.572 16:56:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.572 16:56:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:32.572 16:56:06 
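For reference, the discovery loop traced above (functions.sh@47-52) has roughly this shape: walk /sys/class/nvme/nvme*, skip controllers that pci_can_use rejects, then hand each device to nvme_get (sketched a few entries below) and record it in the global maps. A minimal sketch only; pci_can_use is stubbed here (the real filter in scripts/common.sh honors the PCI allow/block lists), and reading the BDF from the sysfs 'address' attribute is this note's assumption, not necessarily the exact lookup functions.sh performs.

  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls
  pci_can_use() { return 0; }                 # stub; real filter lives in scripts/common.sh
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue
      pci=$(<"$ctrl/address")                 # e.g. 0000:00:10.0 (assumed sysfs attribute)
      pci_can_use "$pci" || continue
      ctrl_dev=${ctrl##*/}                    # e.g. nvme1
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
      ctrls["$ctrl_dev"]=$ctrl_dev
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns
      bdfs["$ctrl_dev"]=$pci
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
  done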
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 
16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:32.572 
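Every eval line in this stretch comes from the same small loop: nvme_get runs nvme-cli against the device, splits each "field : value" output line on the first colon, and evals the pair into a global associative array named after the device (here nvme1). A condensed sketch of that loop; the whitespace trimming is illustrative rather than the verbatim functions.sh code, and it assumes nvme-cli's one-pair-per-line output format.

  nvme_cmd=/usr/local/src/nvme-cli/nvme    # binary path taken from the trace
  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                  # declare the global map, e.g. nvme1=()
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}         # 'lbaf  0 ' -> lbaf0
          [[ -n $val ]] || continue        # header lines carry no value
          eval "${ref}[$reg]=\"${val# }\"" # e.g. nvme1[vid]="0x1b36"
      done < <("$nvme_cmd" "$@")           # here: nvme id-ctrl /dev/nvme1
  }

Called as nvme_get nvme1 id-ctrl /dev/nvme1, after which the test can read ${nvme1[sn]}, ${nvme1[mdts]}, and so on.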
16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.572 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.573 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
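Several of the values just captured are bit-packed fields from the NVMe Identify Controller structure rather than plain numbers. sqes=0x66 and cqes=0x44, for instance, encode minimum and maximum queue entry sizes as powers of two in the low and high nibbles (per the NVM Express base specification). A quick shell illustration; decode_es is a made-up helper name for this note:

  decode_es() {   # low nibble = required entry size, high nibble = maximum, each 2^n bytes
      local es=$1
      printf 'min=%d max=%d bytes\n' $((1 << (es & 0xf))) $((1 << ((es >> 4) & 0xf)))
  }
  decode_es 0x66   # SQ entries: min=64 max=64 bytes
  decode_es 0x44   # CQ entries: min=16 max=16 bytes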
00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.574 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:32.574 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.575 16:56:06 
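With nvme1's controller data stored, the loop moves on to its namespaces. The glob at functions.sh@54 needs extglob: for ctrl=/sys/class/nvme/nvme1 it expands to nvme1/@(ng1|nvme1n)*, so a single pattern matches both the generic character node ng1n1 and the block node nvme1n1. A standalone sketch of the expansions and the loop:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme1
  echo "ng${ctrl##*nvme}"      # -> ng1    (strip everything through the last 'nvme')
  echo "${ctrl##*/}n"          # -> nvme1n (basename plus 'n')
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      ns_dev=${ns##*/}         # ng1n1, then nvme1n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
  done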
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
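The namespace geometry is recoverable from the fields just captured: flbas=0x7 selects LBA format 7, whose descriptor (lbaf7, marked "(in use)" further down in the trace) has lbads:12, i.e. 2^12 = 4096-byte blocks, and nsze=0x17a17a is the namespace size in blocks. Worked out in shell arithmetic:

  echo $(( 0x17a17a ))               # 1548666 blocks
  echo $(( 0x17a17a * (1 << 12) ))   # 6343335936 bytes, about 5.9 GiB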
00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.575 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:32.576 16:56:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.576 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 
16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:32.577 
16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.577 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:32.578 16:56:06 
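The block above is one full pass of the nvme_get helper: every `field : value` pair printed by `nvme id-ns /dev/nvme1n1` is folded into the global associative array nvme1n1. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 trace (quoting details in the real helper may differ):

  nvme_get() {
      local ref=$1 reg val
      shift
      local -gA "$ref=()"                           # e.g. declare -gA nvme1n1=()
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue                 # header/blank lines carry no value
          reg=${reg//[[:space:]]/}                  # "lbaf  7 " -> "lbaf7"
          eval "${ref}[\$reg]=\${val# }"            # nvme1n1[nsze]=0x17a17a
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # id-ns /dev/nvme1n1, id-ctrl ...
  }

The lbafN strings it captures describe the namespace's LBA formats: lbads is the log2 of the data size, so the in-use lbaf7 ("ms:64 lbads:12") means 4096-byte blocks carrying 64 bytes of metadata each, consistent with flbas=0x7 above.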
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:32.578 16:56:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.578 16:56:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:32.578 16:56:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.578 16:56:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.578 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
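Some of the harvested id-ctrl fields are packed integers. ver (nvme2[ver]=0x10400 above) packs the NVMe version as MJR<<16 | MNR<<8 | TER, so it decodes to 1.4.0:

  ver=0x10400                                                               # nvme2[ver] from the trace
  printf '%d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))  # -> 1.4.0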
00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.579 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:32.580 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:32.580 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:32.580 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:32.581 
16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.581 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.582 
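The walk that produces these records is traced at nvme/functions.sh@47-63: each /sys/class/nvme/nvmeX that survives the pci_can_use allow/block check is queried with id-ctrl, and the extglob pattern in the for-ns loop then picks up both the ngXnY character devices and the nvmeXnY block devices. A sketch reconstructed from the trace; deriving the BDF from the device symlink is an assumption, since the log only shows the resulting pci= value:

  shopt -s extglob
  declare -A ctrls nvmes bdfs
  declare -a ordered_ctrls

  scan_nvme_ctrls() {
      local ctrl ctrl_dev ns ns_dev pci
      for ctrl in /sys/class/nvme/nvme*; do
          pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:12.0 (assumed derivation)
          pci_can_use "$pci" || continue                    # scripts/common.sh allow/block gate
          ctrl_dev=${ctrl##*/}                              # nvme2
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
          declare -gA "${ctrl_dev}_ns=()"
          unset -n _ctrl_ns; local -n _ctrl_ns=${ctrl_dev}_ns
          # extglob: match both the ng2n* char devices and the nvme2n* block devices
          for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
              [[ -e $ns ]] || continue
              ns_dev=${ns##*/}
              nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
              _ctrl_ns[${ns##*n}]=$ns_dev                   # keyed by namespace number
          done
          ctrls[$ctrl_dev]=$ctrl_dev
          nvmes[$ctrl_dev]=${ctrl_dev}_ns
          bdfs[$ctrl_dev]=$pci
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
      done
  }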
16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()'
00:09:32.582 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:32.582 16:56:06 nvme_scc -- [per-field trace condensed; ng2n1 fields parsed:]
00:09:32.582 16:56:06 nvme_scc -- # ng2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
00:09:32.582 16:56:06 nvme_scc -- # ng2n1: nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:09:32.583 16:56:06 nvme_scc -- # ng2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:32.583 16:56:06 nvme_scc -- # ng2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:32.583 16:56:06 nvme_scc -- # ng2n1: lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:32.583 16:56:06 nvme_scc -- # ng2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:32.583 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:09:32.584 16:56:06 nvme_scc -- [per-field trace condensed; ng2n2 parses with field values identical to ng2n1 above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ']
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
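The parsed values decode straightforwardly: flbas=0x4 selects LBA format 4, whose in-use descriptor ('ms:0 lbads:12 rp:0 (in use)') gives 2^12-byte logical blocks, and nsze=0x100000 counts those blocks. A quick arithmetic check of the namespace size these fields imply, using the values captured for ng2n2 above:

#!/usr/bin/env bash
# Decoding the fields captured in this run's trace.
nsze=0x100000                      # namespace size in logical blocks
lbads=12                           # from in-use lbaf4: lbads:12

block_size=$((1 << lbads))         # 4096-byte logical blocks
bytes=$((nsze * block_size))
printf '%d blocks x %d B = %d B (%d GiB)\n' \
    $((nsze)) "$block_size" "$bytes" $((bytes >> 30))
# -> 1048576 blocks x 4096 B = 4294967296 B (4 GiB)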
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:32.585 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:32.586 16:56:06 nvme_scc -- [per-field trace condensed; ng2n3 parses with field values identical to ng2n1 above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ']
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
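Once populated, these arrays are consulted by name elsewhere in the suite; the trace's `local -n _ctrl_ns=nvme2_ns` uses the same nameref mechanism. A hypothetical lookup helper built on that idea (`get_field` and the stand-in data are assumptions for illustration, not part of nvme/functions.sh):

#!/usr/bin/env bash
# Hypothetical consumer of the parsed arrays; stand-in data included
# so the sketch runs on its own.
declare -A ng2n3=([nsze]=0x100000 [msrc]=127)

get_field() {                  # get_field <array-name> <field>
    local -n arr=$1            # bash nameref, as in functions.sh@53
    echo "${arr[$2]:-}"
}

get_field ng2n3 nsze           # -> 0x100000
get_field ng2n3 msrc           # -> 127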
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:32.587 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:32.588 16:56:06 nvme_scc -- [per-field trace condensed; nvme2n1 parses with the same field values as ng2n1 above, nsze=0x100000 through lbaf2='ms:16 lbads:9 rp:0 ']
00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "'
00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.588 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:32.589 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.589 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:32.590 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:32.591 
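The nvme_get invocation traced just above is what produces every `eval 'nvme2n3[...]=...'` line that follows: the helper runs nvme-cli, splits each `field : value` line of the output on its first colon, and evals the pair into a global associative array named after the device node. A minimal sketch of that loop, with the body an assumed simplification of nvme/functions.sh (the helper name, argument order, and idioms mirror the @16-@23 trace lines):

    # Sketch: parse `nvme id-ns` / `nvme id-ctrl` text output into a global
    # associative array, as the @16-@23 trace lines above suggest.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                          # e.g. nvme2n3=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue                # skip lines with no value
            val=${val#"${val%%[! ]*}"}               # trim leading spaces
            # e.g. reg="nsze   " -> key "nsze"; reg="lbaf  4" -> key "lbaf4"
            eval "${ref}[${reg//[[:space:]]/}]=\"${val}\""
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # As invoked in the log: nvme_get nvme2n3 id-ns /dev/nvme2n3

Each lbafN value captured this way pairs a metadata size with lbads, the log2 of the data block size, so lbads:9 means 512-byte and lbads:12 means 4096-byte blocks; flbas=0x4 selects format index 4, which is why the lbaf4 entries above carry the "(in use)" marker.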
16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:32.591 16:56:06 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:32.591 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:32.592 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.592 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:32.593 16:56:06 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:32.593 16:56:06 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:32.593 16:56:06 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:32.593 16:56:06 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:32.593 16:56:06 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 
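Between namespace dumps, the @47-@63 lines above show the surrounding enumeration: functions.sh walks /sys/class/nvme, filters each controller's PCI address through scripts/common.sh's pci_can_use, parses it with nvme_get, and records it in the ctrls/nvmes/bdfs/ordered_ctrls maps. A rough reconstruction follows; the readlink-based BDF lookup and the PCI_BLOCKED variable are assumptions, since the log only shows the resulting assignments:

    # Hypothetical reconstruction of the walk traced at functions.sh@47-63.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    pci_can_use() { [[ ! ${PCI_BLOCKED:-} =~ $1 ]]; }    # stand-in for common.sh@18-27
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # e.g. 0000:00:13.0 (assumed lookup)
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme3
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # helper sketched earlier
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                # name of its namespace map
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev       # indexed by controller number
    done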
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:32.593 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:32.594 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:32.594 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 
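A couple of the nvme3 id-ctrl fields above decode into friendlier numbers: ver packs the implemented NVMe spec version into major/minor/tertiary bytes, and mdts bounds the maximum data transfer size as a power-of-two multiple of the controller's minimum memory page size. For example, assuming the common 4 KiB minimum page size, which this trace does not show:

    # Values copied from the nvme3 dump above.
    ver=0x10400     # bits 31:16 major, 15:8 minor, 7:0 tertiary
    printf 'NVMe spec %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    mdts=7          # max transfer = 2^mdts minimum pages
    echo "max data transfer: $(( (1 << mdts) * 4 )) KiB"    # 512 KiB with 4 KiB pages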
16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:32.594 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:32.595 16:56:06 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 
16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:32.595 
16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.595 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:32.596 16:56:06 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:32.596 16:56:06 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:32.596 16:56:06 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
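The ctrl_has_scc calls replayed above pull a single register back out of each controller's associative array through a bash nameref (local -n _ctrl=nvme1) rather than a second eval; the [[ -n ... ]]/echo pair at functions.sh@75-76 is the actual lookup. A minimal standalone sketch of that pattern follows; the array contents are hypothetical stand-ins for a real id-ctrl scan, and only get_feature itself mirrors what get_nvme_ctrl_feature does in functions.sh.

#!/usr/bin/env bash
# Hypothetical register store; a real run fills this in from `nvme id-ctrl`.
declare -A nvme1=([oncs]=0x15d [mdts]=7)

get_feature() {
    local ctrl=$1 reg=$2
    local -n _ctrl=$ctrl        # nameref: _ctrl aliases the array whose name is in $ctrl
    [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
}

get_feature nvme1 oncs          # prints 0x15d, matching the trace above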
00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:32.597 16:56:06 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:32.597 16:56:06 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:32.597 16:56:06 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:32.597 16:56:06 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:32.597 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:33.169 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.169 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.169 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.430 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:33.430 16:56:07 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:33.430 16:56:07 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:33.430 16:56:07 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.430 16:56:07 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:33.430 ************************************ 00:09:33.430 START TEST nvme_simple_copy 00:09:33.430 ************************************ 00:09:33.430 16:56:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:33.692 Initializing NVMe Controllers 00:09:33.692 Attaching to 0000:00:10.0 00:09:33.692 Controller supports SCC. Attached to 0000:00:10.0 00:09:33.692 Namespace ID: 1 size: 6GB 00:09:33.692 Initialization complete. 
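The selection that just settled on nvme1 comes down to one bitwise test: every controller here reports ONCS=0x15d, bit 8 of ONCS advertises the Copy (simple copy) command, so (( oncs & 1 << 8 )) is non-zero for all four and the first match returned is used. A self-contained sketch of that test, with the value taken from the trace and everything else illustrative:

#!/usr/bin/env bash
# ONCS bit 8 is the Copy support bit that ctrl_has_scc checks.
oncs=0x15d                      # value reported by the QEMU controllers above
if (( oncs & 1 << 8 )); then    # shift binds tighter than & in bash arithmetic
    echo "controller supports simple copy"
fi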
00:09:33.692
00:09:33.692 Controller QEMU NVMe Ctrl (12340 )
00:09:33.692 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:33.692 Namespace Block Size:4096
00:09:33.692 Writing LBAs 0 to 63 with Random Data
00:09:33.692 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:33.692 LBAs matching Written Data: 64
00:09:33.692
00:09:33.692 real 0m0.286s
00:09:33.692 user 0m0.054s
00:09:33.692 sys 0m0.031s
00:09:33.692 16:56:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.692 ************************************
00:09:33.692 END TEST nvme_simple_copy
00:09:33.692 ************************************
00:09:33.692 16:56:07 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:33.692
00:09:33.692 real 0m7.840s
00:09:33.692 user 0m1.077s
00:09:33.692 sys 0m1.422s
00:09:33.692 16:56:07 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.692 ************************************
00:09:33.692 END TEST nvme_scc
00:09:33.692 ************************************
00:09:33.692 16:56:07 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:33.692 16:56:08 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:33.692 16:56:08 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:33.692 16:56:08 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:33.692 16:56:08 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:33.692 16:56:08 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:33.692 16:56:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.692 16:56:08 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.692 16:56:08 -- common/autotest_common.sh@10 -- # set +x
00:09:33.692 ************************************
00:09:33.692 START TEST nvme_fdp
00:09:33.692 ************************************
00:09:33.692 16:56:08 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:33.955 * Looking for test storage...
00:09:33.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.955 --rc genhtml_branch_coverage=1 00:09:33.955 --rc genhtml_function_coverage=1 00:09:33.955 --rc genhtml_legend=1 00:09:33.955 --rc geninfo_all_blocks=1 00:09:33.955 --rc geninfo_unexecuted_blocks=1 00:09:33.955 00:09:33.955 ' 00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.955 --rc genhtml_branch_coverage=1 00:09:33.955 --rc genhtml_function_coverage=1 00:09:33.955 --rc genhtml_legend=1 00:09:33.955 --rc geninfo_all_blocks=1 00:09:33.955 --rc geninfo_unexecuted_blocks=1 00:09:33.955 00:09:33.955 ' 00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.955 --rc genhtml_branch_coverage=1 00:09:33.955 --rc genhtml_function_coverage=1 00:09:33.955 --rc genhtml_legend=1 00:09:33.955 --rc geninfo_all_blocks=1 00:09:33.955 --rc geninfo_unexecuted_blocks=1 00:09:33.955 00:09:33.955 ' 00:09:33.955 16:56:08 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.955 --rc genhtml_branch_coverage=1 00:09:33.955 --rc genhtml_function_coverage=1 00:09:33.955 --rc genhtml_legend=1 00:09:33.955 --rc geninfo_all_blocks=1 00:09:33.955 --rc geninfo_unexecuted_blocks=1 00:09:33.955 00:09:33.955 ' 00:09:33.955 16:56:08 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:33.955 16:56:08 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:33.955 16:56:08 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:33.955 16:56:08 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:33.955 16:56:08 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.955 16:56:08 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.956 16:56:08 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.956 16:56:08 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.956 16:56:08 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.956 16:56:08 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:33.956 16:56:08 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:33.956 16:56:08 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:33.956 16:56:08 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:33.956 16:56:08 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:34.218 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:34.479 Waiting for block devices as requested 00:09:34.479 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.479 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.741 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:34.741 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:40.055 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:40.055 16:56:14 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:40.055 16:56:14 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:40.055 16:56:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.055 16:56:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:40.055 16:56:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.055 16:56:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.055 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:40.056 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.056 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]]
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
[xtrace condensed for the remainder of this enumeration: every assignment below goes through the same functions.sh@21-23 cycle shown above (IFS=:, read -r reg val, [[ -n <val> ]], eval); only the resulting assignments are kept]
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:09:40.056 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:09:40.057 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
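For readability: the xtrace above is nvme/functions.sh's nvme_get helper caching every field of `nvme id-ctrl` into a global bash associative array (nvme0[...]). A minimal paraphrase of that pattern, reconstructed from the @16-23 trace lines; the exact quoting and whitespace trimming in the real script may differ:

    # Paraphrased sketch of the nvme_get pattern traced at functions.sh@16-23.
    # $1 names a global associative array; the rest is the nvme-cli command.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # e.g. declare -gA 'nvme0=()'
        while IFS=: read -r reg val; do          # split "wctemp : 343" on the first ':'
            reg=${reg//[[:space:]]/}             # drop padding around the field name
            [[ -n $reg && -n $val ]] || continue # skip headers and blank lines
            eval "${ref}[${reg}]=\"${val# }\""   # -> nvme0[wctemp]="343"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Called as `nvme_get nvme0 id-ctrl /dev/nvme0`, each eval in the trace is one iteration of this loop; values that themselves contain colons (the ps0 power-state line, for instance) survive because read assigns everything after the first ':' to val.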
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0
00:09:40.058 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:40.059 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 '
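The lbaf0-lbaf7 entries just parsed describe the eight advertised LBA formats, and flbas=0x4 marks lbaf4 ('ms:0 lbads:12 rp:0 (in use)') as the active one. A hedged helper (lbaf_in_use is our name for illustration, not part of functions.sh; bash 4.3+ for the nameref) showing how those strings decode:

    # Decode the in-use LBA format from a parsed id-ns array.
    lbaf_in_use() {
        local -n ns=$1                        # nameref: pass the array name, e.g. "ng0n1"
        local idx=$(( ${ns[flbas]} & 0xf ))   # low nibble of flbas selects the format -> 4
        local lbads=${ns[lbaf$idx]#*lbads:}   # 'ms:0 lbads:12 rp:0 (in use)' -> '12 rp:0 (in use)'
        echo "lbaf$idx: $(( 1 << ${lbads%% *} ))-byte logical blocks"
    }
    lbaf_in_use ng0n1    # -> lbaf4: 4096-byte logical blocks

lbads is a power of two, so lbads:12 means 4096-byte blocks; with nsze=0x140000 (1,310,720 blocks) that puts this namespace at 1,310,720 x 4096 bytes = 5 GiB.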
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:09:40.060 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
[nvme0n1 id-ns parses field-for-field identical to ng0n1 above: nsze/ncap/nuse=0x140000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, all-zero nguid/eui64, and the same lbaf0-lbaf7 table with lbaf4 in use]
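With both namespace views cached, functions.sh registers the controller (the @58-63 lines below) and the @47 loop advances to the next /sys/class/nvme entry, here nvme1 behind PCI 0000:00:10.0. A sketch of that outer loop, paraphrased from the traced line numbers; how `pci` is derived is our assumption, since the trace only shows its resulting value:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # assumption: BDF via the sysfs link
        pci_can_use "$pci" || continue            # scripts/common.sh allow/block-list check
        ctrl_dev=${ctrl##*/}                      # nvme0, nvme1, ...
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
        # ...per-namespace nvme_get id-ns calls fill the ${ctrl_dev}_ns map...
        ctrls["$ctrl_dev"]=$ctrl_dev              # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns         # @61: name of the per-ctrl ns map
        bdfs["$ctrl_dev"]=$pci                    # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # @63
    done

bdfs is what later lets the test map each controller back to its PCI address (nvme0 -> 0000:00:11.0, nvme1 -> 0000:00:10.0); the trace that follows repeats the id-ctrl parse for nvme1.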
"' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:40.061 16:56:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.061 16:56:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:40.061 16:56:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.061 16:56:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:40.061 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:40.062 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:40.062 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:40.063 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
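The per-field records above all come from the nvme_get helper in nvme/functions.sh. A minimal sketch of how it appears to work, reconstructed from this trace alone and not from the verbatim source (key trimming and quoting details are assumptions; the real helper may handle more edge cases):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declare the global array, e.g. nvme1
        while IFS=: read -r reg val; do     # split each "reg : val" output line
            reg=${reg//[[:space:]]/}        # keys arrive space-padded (assumed)
            [[ -n $val ]] || continue       # skip header/blank lines
            eval "${ref}[$reg]=\"${val# }\""   # store, e.g. nvme1[sn]='12340 '
        done < <(/usr/local/src/nvme-cli/nvme "$@")   # e.g. id-ctrl /dev/nvme1
    }
    # called as in the trace: nvme_get nvme1 id-ctrl /dev/nvme1

The eval-per-field pattern explains why every single identify field produces its own [[ -n ]]/eval pair in the xtrace output.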
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:40.064 16:56:14 nvme_fdp -- nvme/functions.sh@21-23 -- # ng1n1 id-ns fields (per-field trace condensed to the parsed values):
    nsze=0x17a17a  ncap=0x17a17a  nuse=0x17a17a
    nsfeat=0x14  nlbaf=7  flbas=0x7  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
    npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127
    nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 '   lbaf1='ms:8 lbads:9 rp:0 '
    lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 '  lbaf5='ms:8 lbads:12 rp:0 '
    lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
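The namespace geometry can be read straight off the array nvme_get just filled in: the low nibble of flbas (0x7) selects lbaf7, whose lbads:12 means 4096-byte data blocks with 64 bytes of metadata, and nsze is 0x17a17a = 1,548,666 blocks, so the namespace is 1,548,666 * 4096 = 6,343,335,936 bytes (about 5.9 GiB). A self-contained bash check, with the values copied from the trace and all helper variable names mine rather than part of functions.sh:

    declare -A ng1n1=( [nsze]=0x17a17a [flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)' )
    fmt=$(( ng1n1[flbas] & 0xf ))            # FLBAS bits 3:0 pick the format -> 7
    lbaf=${ng1n1[lbaf$fmt]}                  # 'ms:64 lbads:12 rp:0 (in use)'
    [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo $(( ng1n1[nsze] * (1 << lbads) ))   # 1548666 * 4096 = 6343335936 bytes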
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:40.066 16:56:14 nvme_fdp -- nvme/functions.sh@21-23 -- # nvme1n1 id-ns fields (per-field trace condensed; same namespace as ng1n1 above via its block node, all values match):
    nsze=0x17a17a  ncap=0x17a17a  nuse=0x17a17a
    nsfeat=0x14  nlbaf=7  flbas=0x7  mc=0x3  dpc=0x1f  dps=0
    nmic=0  rescap=0  fpi=0  dlfeat=1
    nawun=0  nawupf=0  nacwu=0  nabsn=0  nabo=0  nabspf=0  noiob=0  nvmcap=0
    npwg=0  npwa=0  npdg=0  npda=0  nows=0
    mssrl=128  mcl=128  msrc=127
    nulbaf=0  anagrpid=0  nsattr=0  nvmsetid=0  endgid=0
    nguid=00000000000000000000000000000000  eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0 '   lbaf1='ms:8 lbads:9 rp:0 '
    lbaf2='ms:16 lbads:9 rp:0 '  lbaf3='ms:64 lbads:9 rp:0 '
    lbaf4='ms:0 lbads:12 rp:0 '  lbaf5='ms:8 lbads:12 rp:0 '
    lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:40.067 16:56:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:40.068 16:56:14 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:40.068 16:56:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:40.068 16:56:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:40.068 16:56:14 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 --
[[ -n '' ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
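The ver value just captured (0x10400) packs the controller's NVMe version into major/minor/tertiary bytes, so it reads as NVMe 1.4.0. A one-off sketch of the unpacking (decode_ver is a hypothetical helper, not from the test scripts):

  decode_ver() {
    local v=$(( $1 ))
    printf 'NVMe %d.%d.%d\n' $(( v >> 16 )) $(( (v >> 8) & 0xff )) $(( v & 0xff ))
  }
  decode_ver 0x10400   # -> NVMe 1.4.0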
00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.068 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:40.069 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
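The wctemp=343 and cctemp=373 values above are the warning and critical composite temperature thresholds, which NVMe reports in kelvin; subtracting 273 puts them at roughly 70 °C and 100 °C. A trivial sketch (kelvin_to_c is hypothetical):

  kelvin_to_c() { echo "$(( $1 - 273 )) C"; }
  kelvin_to_c 343   # wctemp -> 70 C
  kelvin_to_c 373   # cctemp -> 100 C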
00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:40.069 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:40.069 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
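The repeating functions.sh@21–23 steps throughout this trace are a single loop: nvme_get runs nvme-cli (functions.sh@16), splits each "reg : val" output line on ':' (IFS=: at @21), skips empty values (@22), and evals the pair into a global associative array named after the device (@23); functions.sh@58–63 then register each parsed device in the ctrls/nvmes/bdfs maps. A simplified, self-contained sketch of that pattern, assuming nvme-cli's plain id-ctrl/id-ns output format (the real helper also handles multi-colon descriptor lines such as the ps0 power-state entry above):

  # Simplified version of the nvme_get pattern shown in the trace: parse
  # "reg : val" lines from nvme-cli into a dynamically named global assoc array.
  nvme_get() {
    local ref=$1 cmd=$2 dev=$3 reg val
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # keys are padded for alignment; strip it
      val=${val# }
      [[ -n $reg && -n $val ]] || continue
      eval "${ref}[\$reg]=\$val"
    done < <(nvme "$cmd" "$dev")
  }
  nvme_get nvme2 id-ctrl /dev/nvme2
  echo "${nvme2[subnqn]}"   # -> nqn.2019-08.org.qemu:12342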
00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.070 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 
16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.071 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.072 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # 
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:09:40.072 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:40.073 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
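The trace above is the xtrace expansion of nvme_get in nvme/functions.sh: the suite enumerates every namespace node (both the ng* character nodes and the nvme*n* block nodes) of one controller and caches each field of `nvme id-ns` in a global bash associative array named after the node. A minimal standalone sketch of that pattern follows, assuming nvme-cli's usual one-"field : value"-per-line id-ns output; get_ns and NVME_CMD are illustrative names for this sketch, not helpers from the suite:

    #!/usr/bin/env bash
    # Sketch only: mirror the enumerate-and-parse pattern seen in the trace.
    # NVME_CMD is a hypothetical stand-in for the nvme-cli binary path.
    NVME_CMD=${NVME_CMD:-nvme}
    shopt -s extglob nullglob

    get_ns() { # get_ns <array-name> <device>  e.g. get_ns ng2n2 /dev/ng2n2
        local ref=$1 dev=$2 reg val
        declare -gA "$ref" # global associative array, like local -gA in the trace
        while IFS=: read -r reg val; do
            # "lbaf  0 : ms:0 lbads:9 ..." -> key lbaf0; val keeps its inner colons
            reg=${reg//[[:space:]]/}
            val=${val##+([[:space:]])} val=${val%%+([[:space:]])}
            [[ -n $reg && -n $val ]] || continue # skip headers/blank lines
            eval "${ref}[${reg}]=\"\$val\""
        done < <("$NVME_CMD" id-ns "$dev")
    }

    ctrl=/sys/class/nvme/nvme2
    # Same glob as functions.sh@54: matches ng2n* and nvme2n* under the controller.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}
        [[ -e $ns ]] && get_ns "$ns_dev" "/dev/$ns_dev"
    done
    declare -p ng2n2 2> /dev/null # read-back example, if that namespace exists

The eval-through-a-name indirection is what lets one function fill arrays with caller-chosen names (ng2n2, nvme2n1, ...), which is why every assignment in the trace appears twice: once as the eval and once as its result.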
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()'
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0
00:09:40.074 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:09:40.075 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:09:40.076 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:09:40.077 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.078 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:40.079 16:56:14 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:40.079 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.079 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:40.080 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:40.080 16:56:14 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:40.080 16:56:14 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:40.080 16:56:14 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:40.080 16:56:14 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
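
(The repeated IFS=: / read -r / eval steps traced here are SPDK's nvme_get helper caching each "register : value" line of `/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3` output into a bash associative array named after the device, exactly as it did for the id-ns data above. A minimal self-contained sketch of the same pattern, with illustrative names rather than the exact functions.sh implementation:

    # Parse nvme-cli "key : value" output into an associative array.
    declare -A nvme3
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banner/blank lines
        reg=${reg//[[:space:]]/}           # trim the padded register name
        nvme3[$reg]=$val                   # cache the raw value text
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "${nvme3[sn]}"                    # e.g. "12343 " per the trace

The eval seen in the trace serves the same purpose as the direct assignment in the sketch; the array ends up queryable by register name later in the run.)
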
00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.080 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 
16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:40.081 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
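
(Several of the cached values are bit-packed hex fields; for example the nvme3[sqes]=0x66 and nvme3[cqes]=0x44 entries just above encode the required and maximum queue entry sizes as power-of-two nibbles. The decode below is an illustration based on the NVMe base spec layout, not part of functions.sh:

    # Decode SQES/CQES: low nibble = required entry size (2^n bytes),
    # high nibble = maximum. 0x66 -> 64-byte SQEs, 0x44 -> 16-byte CQEs.
    sqes=0x66 cqes=0x44
    printf 'SQE: %d..%d bytes\n' $((2 ** (sqes & 0xf))) $((2 ** (sqes >> 4)))
    printf 'CQE: %d..%d bytes\n' $((2 ** (cqes & 0xf))) $((2 ** (cqes >> 4)))

)
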
00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:40.082 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
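
(The records that follow show a quirk of this parser worth noting: nvme-cli wraps the long power-state line, so the continuation beginning "rwt:0 ..." is split at its first colon and cached as a separate nvme3[rwt] key instead of being folded into nvme3[ps0]. A self-contained illustration of the same read loop on a wrapped line, using sample text only:

    printf '%s\n' \
        'ps    0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' \
        'rwt:0 rwl:0 idle_power:- active_power:-' |
    while IFS=: read -r reg val; do
        printf '%s => %s\n' "$reg" "$val"
    done
    # first line  -> key "ps    0 ", value "mp:25.00W ... rrl:0"
    # second line -> key "rwt",      value "0 rwl:0 idle_power:- active_power:-"

)
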
00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:40.083 16:56:14 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:40.345 16:56:14 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:40.345 16:56:14 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:40.345 16:56:14 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:40.607 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:41.179 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.179 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.179 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.179 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:41.441 16:56:15 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.441 16:56:15 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.441 16:56:15 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.441 16:56:15 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.441 ************************************ 00:09:41.441 START TEST nvme_flexible_data_placement 00:09:41.441 ************************************ 00:09:41.441 16:56:15 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:41.702 Initializing NVMe Controllers 00:09:41.702 Attaching to 0000:00:13.0 00:09:41.702 Controller supports FDP Attached to 0000:00:13.0 00:09:41.702 Namespace ID: 1 Endurance Group ID: 1 00:09:41.702 Initialization complete. 
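The controller walk above lands on nvme3 because its CTRATT value (0x88010) has bit 19 set, the Flexible Data Placement attribute, while the other three controllers report 0x8000 and fall through without an echo. A sketch of the same check, using the ctratt values echoed in this run:

    ctrl_has_fdp() { (( $1 & 1 << 19 )); }       # CTRATT bit 19 == FDP supported

    for pair in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
        ctrl=${pair%%:*} ctratt=${pair##*:}
        ctrl_has_fdp "$ctratt" && echo "$ctrl"   # prints only nvme3
    done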
00:09:41.702 00:09:41.702 ================================== 00:09:41.702 == FDP tests for Namespace: #01 == 00:09:41.702 ================================== 00:09:41.702 00:09:41.702 Get Feature: FDP: 00:09:41.702 ================= 00:09:41.702 Enabled: Yes 00:09:41.702 FDP configuration Index: 0 00:09:41.702 00:09:41.702 FDP configurations log page 00:09:41.702 =========================== 00:09:41.702 Number of FDP configurations: 1 00:09:41.702 Version: 0 00:09:41.702 Size: 112 00:09:41.702 FDP Configuration Descriptor: 0 00:09:41.702 Descriptor Size: 96 00:09:41.702 Reclaim Group Identifier format: 2 00:09:41.702 FDP Volatile Write Cache: Not Present 00:09:41.702 FDP Configuration: Valid 00:09:41.702 Vendor Specific Size: 0 00:09:41.702 Number of Reclaim Groups: 2 00:09:41.702 Number of Reclaim Unit Handles: 8 00:09:41.702 Max Placement Identifiers: 128 00:09:41.702 Number of Namespaces Supported: 256 00:09:41.702 Reclaim unit Nominal Size: 6000000 bytes 00:09:41.702 Estimated Reclaim Unit Time Limit: Not Reported 00:09:41.702 RUH Desc #000: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #001: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #002: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #003: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #004: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #005: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #006: RUH Type: Initially Isolated 00:09:41.702 RUH Desc #007: RUH Type: Initially Isolated 00:09:41.702 00:09:41.702 FDP reclaim unit handle usage log page 00:09:41.702 ====================================== 00:09:41.702 Number of Reclaim Unit Handles: 8 00:09:41.702 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:41.702 RUH Usage Desc #001: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #002: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #003: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #004: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #005: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #006: RUH Attributes: Unused 00:09:41.702 RUH Usage Desc #007: RUH Attributes: Unused 00:09:41.702 00:09:41.702 FDP statistics log page 00:09:41.702 ======================= 00:09:41.702 Host bytes with metadata written: 929718272 00:09:41.702 Media bytes with metadata written: 929964032 00:09:41.702 Media bytes erased: 0 00:09:41.702 00:09:41.702 FDP Reclaim unit handle status 00:09:41.702 ============================== 00:09:41.702 Number of RUHS descriptors: 2 00:09:41.702 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000495a 00:09:41.702 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:41.702 00:09:41.702 FDP write on placement id: 0 success 00:09:41.702 00:09:41.702 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:41.702 00:09:41.702 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:41.702 00:09:41.702 Get Feature: FDP Events for Placement handle: #0 00:09:41.702 ======================== 00:09:41.702 Number of FDP Events: 6 00:09:41.702 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:41.702 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:41.702 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:41.702 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:41.702 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:41.702 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:41.702 00:09:41.702 FDP events log page
00:09:41.703 =================== 00:09:41.703 Number of FDP events: 1 00:09:41.703 FDP Event #0: 00:09:41.703 Event Type: RU Not Written to Capacity 00:09:41.703 Placement Identifier: Valid 00:09:41.703 NSID: Valid 00:09:41.703 Location: Valid 00:09:41.703 Placement Identifier: 0 00:09:41.703 Event Timestamp: 7 00:09:41.703 Namespace Identifier: 1 00:09:41.703 Reclaim Group Identifier: 0 00:09:41.703 Reclaim Unit Handle Identifier: 0 00:09:41.703 00:09:41.703 FDP test passed 00:09:41.703 00:09:41.703 real 0m0.250s 00:09:41.703 user 0m0.075s 00:09:41.703 sys 0m0.073s 00:09:41.703 16:56:15 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.703 ************************************ 00:09:41.703 END TEST nvme_flexible_data_placement 00:09:41.703 ************************************ 00:09:41.703 16:56:15 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:41.703 00:09:41.703 real 0m7.879s 00:09:41.703 user 0m1.113s 00:09:41.703 sys 0m1.487s 00:09:41.703 16:56:15 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.703 ************************************ 00:09:41.703 END TEST nvme_fdp 00:09:41.703 ************************************ 00:09:41.703 16:56:15 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:41.703 16:56:15 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:41.703 16:56:15 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:41.703 16:56:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.703 16:56:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.703 16:56:15 -- common/autotest_common.sh@10 -- # set +x 00:09:41.703 ************************************ 00:09:41.703 START TEST nvme_rpc 00:09:41.703 ************************************ 00:09:41.703 16:56:15 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:41.703 * Looking for test storage... 
00:09:41.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:41.703 16:56:16 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:41.703 16:56:16 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:41.703 16:56:16 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:41.963 16:56:16 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:41.963 16:56:16 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:41.964 16:56:16 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:41.964 16:56:16 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:41.964 16:56:16 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:41.964 16:56:16 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:41.964 16:56:16 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.964 --rc genhtml_branch_coverage=1 00:09:41.964 --rc genhtml_function_coverage=1 00:09:41.964 --rc genhtml_legend=1 00:09:41.964 --rc geninfo_all_blocks=1 00:09:41.964 --rc geninfo_unexecuted_blocks=1 00:09:41.964 00:09:41.964 ' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.964 --rc genhtml_branch_coverage=1 00:09:41.964 --rc genhtml_function_coverage=1 00:09:41.964 --rc genhtml_legend=1 00:09:41.964 --rc geninfo_all_blocks=1 00:09:41.964 --rc geninfo_unexecuted_blocks=1 00:09:41.964 00:09:41.964 ' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.964 --rc genhtml_branch_coverage=1 00:09:41.964 --rc genhtml_function_coverage=1 00:09:41.964 --rc genhtml_legend=1 00:09:41.964 --rc geninfo_all_blocks=1 00:09:41.964 --rc geninfo_unexecuted_blocks=1 00:09:41.964 00:09:41.964 ' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:41.964 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:41.964 --rc genhtml_branch_coverage=1 00:09:41.964 --rc genhtml_function_coverage=1 00:09:41.964 --rc genhtml_legend=1 00:09:41.964 --rc geninfo_all_blocks=1 00:09:41.964 --rc geninfo_unexecuted_blocks=1 00:09:41.964 00:09:41.964 ' 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65671 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:41.964 16:56:16 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65671 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65671 ']' 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.964 16:56:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:41.964 [2024-12-05 16:56:16.296848] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
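get_first_nvme_bdf above asks gen_nvme.sh for a generated bdev config and takes the first traddr from it; with four controllers present it settles on 0000:00:10.0. The core of the helper, reconstructed from the xtrace:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1             # bail out if no NVMe was found
    echo "${bdfs[0]}"                           # 0000:00:10.0 in this run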
00:09:41.964 [2024-12-05 16:56:16.297015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65671 ] 00:09:42.225 [2024-12-05 16:56:16.459710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.225 [2024-12-05 16:56:16.585082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.225 [2024-12-05 16:56:16.585082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.167 16:56:17 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.167 16:56:17 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:43.167 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:43.427 Nvme0n1 00:09:43.427 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:43.427 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:43.427 request: 00:09:43.427 { 00:09:43.427 "bdev_name": "Nvme0n1", 00:09:43.427 "filename": "non_existing_file", 00:09:43.427 "method": "bdev_nvme_apply_firmware", 00:09:43.427 "req_id": 1 00:09:43.427 } 00:09:43.427 Got JSON-RPC error response 00:09:43.427 response: 00:09:43.427 { 00:09:43.427 "code": -32603, 00:09:43.427 "message": "open file failed." 00:09:43.427 } 00:09:43.427 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:43.427 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:43.427 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:43.687 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:43.687 16:56:17 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65671 00:09:43.687 16:56:17 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65671 ']' 00:09:43.687 16:56:17 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65671 00:09:43.687 16:56:17 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:43.687 16:56:17 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65671 00:09:43.687 killing process with pid 65671 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65671' 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65671 00:09:43.687 16:56:18 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65671 00:09:45.603 00:09:45.603 real 0m3.637s 00:09:45.603 user 0m6.761s 00:09:45.603 sys 0m0.656s 00:09:45.603 16:56:19 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.603 ************************************ 00:09:45.603 END TEST nvme_rpc 00:09:45.603 ************************************ 00:09:45.603 16:56:19 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:45.603 16:56:19 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.603 16:56:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:45.603 16:56:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.603 16:56:19 -- common/autotest_common.sh@10 -- # set +x 00:09:45.603 ************************************ 00:09:45.603 START TEST nvme_rpc_timeouts 00:09:45.603 ************************************ 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:45.603 * Looking for test storage... 00:09:45.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:45.603 16:56:19 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.603 --rc genhtml_branch_coverage=1 00:09:45.603 --rc genhtml_function_coverage=1 00:09:45.603 --rc genhtml_legend=1 00:09:45.603 --rc geninfo_all_blocks=1 00:09:45.603 --rc geninfo_unexecuted_blocks=1 00:09:45.603 00:09:45.603 ' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.603 --rc genhtml_branch_coverage=1 00:09:45.603 --rc genhtml_function_coverage=1 00:09:45.603 --rc genhtml_legend=1 00:09:45.603 --rc geninfo_all_blocks=1 00:09:45.603 --rc geninfo_unexecuted_blocks=1 00:09:45.603 00:09:45.603 ' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.603 --rc genhtml_branch_coverage=1 00:09:45.603 --rc genhtml_function_coverage=1 00:09:45.603 --rc genhtml_legend=1 00:09:45.603 --rc geninfo_all_blocks=1 00:09:45.603 --rc geninfo_unexecuted_blocks=1 00:09:45.603 00:09:45.603 ' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:45.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.603 --rc genhtml_branch_coverage=1 00:09:45.603 --rc genhtml_function_coverage=1 00:09:45.603 --rc genhtml_legend=1 00:09:45.603 --rc geninfo_all_blocks=1 00:09:45.603 --rc geninfo_unexecuted_blocks=1 00:09:45.603 00:09:45.603 ' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65736 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65736 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65768 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
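At this point nvme_rpc_timeouts has registered its cleanup trap and is about to start the target and block until the RPC socket answers, the same prologue nvme_rpc used above. Roughly, with waitforlisten being the helper sourced from autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
    spdk_tgt_pid=$!
    trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs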
00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65768 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 65768 ']' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.603 16:56:19 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:45.603 16:56:19 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:45.603 [2024-12-05 16:56:19.936790] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:09:45.603 [2024-12-05 16:56:19.936966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65768 ] 00:09:45.864 [2024-12-05 16:56:20.099472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:46.125 [2024-12-05 16:56:20.230444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.125 [2024-12-05 16:56:20.230466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.695 16:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.695 Checking default timeout settings: 00:09:46.695 16:56:20 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:46.695 16:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:46.695 16:56:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:46.974 Making settings changes with rpc: 00:09:46.974 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:46.974 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:47.283 Check default vs. modified settings: 00:09:47.283 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:47.283 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:47.560 Setting action_on_timeout is changed as expected. 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:47.560 Setting timeout_us is changed as expected. 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
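Each check above pulls one field out of the two save_config snapshots and compares them: action_on_timeout goes from none to abort and timeout_us from 0 to 12000000. The pattern, reassembled from the xtrace with the snapshot paths used in this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    get_setting() {                             # field extractor seen in the trace
        grep "$2" "$1" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g'
    }
    $rpc save_config > /tmp/settings_default_65736
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_65736
    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(get_setting /tmp/settings_default_65736 "$setting")
        after=$(get_setting /tmp/settings_modified_65736 "$setting")
        [[ $before == "$after" ]] || echo "Setting $setting is changed as expected."
    done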
00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:47.560 Setting timeout_admin_us is changed as expected. 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65736 /tmp/settings_modified_65736 00:09:47.560 16:56:21 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65768 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 65768 ']' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 65768 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65768 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.560 killing process with pid 65768 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65768' 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 65768 00:09:47.560 16:56:21 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 65768 00:09:48.949 RPC TIMEOUT SETTING TEST PASSED. 00:09:48.949 16:56:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
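The killprocess teardown above checks that pid 65768 is still alive, reads its comm (reactor_0 here), refuses to signal a sudo wrapper, then kills and reaps it. A simplified sketch of that sequence (the real helper treats the sudo case specially rather than just returning):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0      # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1             # simplified; see note above
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }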
00:09:48.949 00:09:48.949 real 0m3.356s 00:09:48.949 user 0m6.422s 00:09:48.949 sys 0m0.616s 00:09:48.949 16:56:23 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.949 16:56:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:48.949 ************************************ 00:09:48.949 END TEST nvme_rpc_timeouts 00:09:48.949 ************************************ 00:09:48.949 16:56:23 -- spdk/autotest.sh@239 -- # uname -s 00:09:48.949 16:56:23 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:48.949 16:56:23 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:48.949 16:56:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:48.949 16:56:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.949 16:56:23 -- common/autotest_common.sh@10 -- # set +x 00:09:48.949 ************************************ 00:09:48.949 START TEST sw_hotplug 00:09:48.949 ************************************ 00:09:48.949 16:56:23 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:48.949 * Looking for test storage... 00:09:48.949 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:48.949 16:56:23 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:48.949 16:56:23 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:48.949 16:56:23 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:48.949 16:56:23 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:48.949 16:56:23 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:48.950 16:56:23 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:48.950 16:56:23 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:48.950 16:56:23 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.950 --rc genhtml_branch_coverage=1 00:09:48.950 --rc genhtml_function_coverage=1 00:09:48.950 --rc genhtml_legend=1 00:09:48.950 --rc geninfo_all_blocks=1 00:09:48.950 --rc geninfo_unexecuted_blocks=1 00:09:48.950 00:09:48.950 ' 00:09:48.950 16:56:23 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.950 --rc genhtml_branch_coverage=1 00:09:48.950 --rc genhtml_function_coverage=1 00:09:48.950 --rc genhtml_legend=1 00:09:48.950 --rc geninfo_all_blocks=1 00:09:48.950 --rc geninfo_unexecuted_blocks=1 00:09:48.950 00:09:48.950 ' 00:09:48.950 16:56:23 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.950 --rc genhtml_branch_coverage=1 00:09:48.950 --rc genhtml_function_coverage=1 00:09:48.950 --rc genhtml_legend=1 00:09:48.950 --rc geninfo_all_blocks=1 00:09:48.950 --rc geninfo_unexecuted_blocks=1 00:09:48.950 00:09:48.950 ' 00:09:48.950 16:56:23 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:48.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:48.950 --rc genhtml_branch_coverage=1 00:09:48.950 --rc genhtml_function_coverage=1 00:09:48.950 --rc genhtml_legend=1 00:09:48.950 --rc geninfo_all_blocks=1 00:09:48.950 --rc geninfo_unexecuted_blocks=1 00:09:48.950 00:09:48.950 ' 00:09:48.950 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.474 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.474 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.474 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.474 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
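nvme_in_userspace above builds its filter from PCI class 01 (mass storage), subclass 08 (non-volatile memory) and prog-if 02 (NVMe), then matches lspci rows against it. The pipeline, reassembled from the xtrace stages (their order in the pipe is inferred, since xtrace prints each stage separately):

    # NVMe functions appear in lspci -mm -n -D output with class "0108" and -p02.
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'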
00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.474 16:56:23 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:49.474 16:56:23 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:49.474 16:56:23 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:49.735 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:49.995 Waiting for block devices as requested 00:09:49.995 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.995 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.256 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:50.256 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.533 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:55.533 16:56:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:55.533 16:56:29 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:55.794 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:55.794 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.794 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:56.055 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:56.317 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.317 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:56.578 16:56:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66629 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:56.578 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:56.579 16:56:30 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:56.579 16:56:30 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:56.579 16:56:30 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:56.579 16:56:30 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:56.579 16:56:30 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:56.579 16:56:30 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:56.844 Initializing NVMe Controllers 00:09:56.844 Attaching to 0000:00:10.0 00:09:56.844 Attaching to 0000:00:11.0 00:09:56.844 Attached to 0000:00:11.0 00:09:56.844 Attached to 0000:00:10.0 00:09:56.844 Initialization complete. Starting I/O... 
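remove_attach_helper above will drive hotplug_events=3 remove/attach cycles against the two allowed controllers while build/examples/hotplug watches for the events, sleeping hotplug_wait=6 seconds around each step. The sysfs side of one cycle looks roughly like this; remove and rescan are standard kernel PCI nodes, but the exact redirection targets are not visible in the xtrace:

    nvmes=(0000:00:10.0 0000:00:11.0)   # nvme_count=2, per the setup above
    hotplug_wait=6
    for bdf in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # hot-remove the function
    done
    sleep "$hotplug_wait"
    echo 1 > /sys/bus/pci/rescan                      # re-discover the devices
    sleep "$hotplug_wait"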
00:09:56.844 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:56.844 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:56.844 00:09:57.794 QEMU NVMe Ctrl (12341 ): 2276 I/Os completed (+2276) 00:09:57.794 QEMU NVMe Ctrl (12340 ): 2276 I/Os completed (+2276) 00:09:57.794 00:09:58.736 QEMU NVMe Ctrl (12341 ): 5336 I/Os completed (+3060) 00:09:58.736 QEMU NVMe Ctrl (12340 ): 5336 I/Os completed (+3060) 00:09:58.736 00:09:59.681 QEMU NVMe Ctrl (12341 ): 8136 I/Os completed (+2800) 00:09:59.681 QEMU NVMe Ctrl (12340 ): 8137 I/Os completed (+2801) 00:09:59.681 00:10:01.068 QEMU NVMe Ctrl (12341 ): 11800 I/Os completed (+3664) 00:10:01.068 QEMU NVMe Ctrl (12340 ): 11792 I/Os completed (+3655) 00:10:01.068 00:10:02.008 QEMU NVMe Ctrl (12341 ): 15642 I/Os completed (+3842) 00:10:02.008 QEMU NVMe Ctrl (12340 ): 15648 I/Os completed (+3856) 00:10:02.008 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.577 [2024-12-05 16:56:36.822522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:02.577 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:02.577 [2024-12-05 16:56:36.823609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.823658] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.823674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.823688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:02.577 [2024-12-05 16:56:36.825219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.825262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.825273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.825285] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:02.577 [2024-12-05 16:56:36.843870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:02.577 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:02.577 [2024-12-05 16:56:36.844740] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.844873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.844895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.844911] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:02.577 [2024-12-05 16:56:36.846260] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.846291] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.846303] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 [2024-12-05 16:56:36.846313] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:02.577 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:02.577 EAL: Scan for (pci) bus failed. 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.577 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:02.838 16:56:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:02.838 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:02.839 Attaching to 0000:00:10.0 00:10:02.839 Attached to 0000:00:10.0 00:10:02.839 QEMU NVMe Ctrl (12340 ): 64 I/Os completed (+64) 00:10:02.839 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:02.839 16:56:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:02.839 Attaching to 0000:00:11.0 00:10:02.839 Attached to 0000:00:11.0 00:10:03.783 QEMU NVMe Ctrl (12340 ): 3827 I/Os completed (+3763) 00:10:03.783 QEMU NVMe Ctrl (12341 ): 3480 I/Os completed (+3480) 00:10:03.783 00:10:04.725 QEMU NVMe Ctrl (12340 ): 7590 I/Os completed (+3763) 00:10:04.725 QEMU NVMe Ctrl (12341 ): 7243 I/Os completed (+3763) 00:10:04.725 00:10:05.667 QEMU NVMe Ctrl (12340 ): 11345 I/Os completed (+3755) 00:10:05.667 QEMU NVMe Ctrl (12341 ): 10997 I/Os completed (+3754) 00:10:05.667 00:10:07.050 QEMU NVMe Ctrl (12340 ): 15059 I/Os completed (+3714) 00:10:07.050 QEMU NVMe Ctrl (12341 ): 14700 I/Os completed (+3703) 00:10:07.050 00:10:08.044 QEMU NVMe Ctrl (12340 ): 18847 I/Os completed (+3788) 00:10:08.044 QEMU NVMe Ctrl (12341 ): 18498 I/Os completed (+3798) 00:10:08.044 00:10:09.011 QEMU NVMe Ctrl (12340 ): 22521 I/Os completed (+3674) 00:10:09.011 QEMU NVMe Ctrl (12341 ): 22188 I/Os completed (+3690) 00:10:09.011 00:10:09.954 QEMU NVMe Ctrl (12340 ): 26153 I/Os completed (+3632) 
00:10:09.954 QEMU NVMe Ctrl (12341 ): 25821 I/Os completed (+3633) 00:10:09.954 00:10:10.900 QEMU NVMe Ctrl (12340 ): 28841 I/Os completed (+2688) 00:10:10.900 QEMU NVMe Ctrl (12341 ): 28506 I/Os completed (+2685) 00:10:10.900 00:10:11.837 QEMU NVMe Ctrl (12340 ): 31481 I/Os completed (+2640) 00:10:11.837 QEMU NVMe Ctrl (12341 ): 31152 I/Os completed (+2646) 00:10:11.837 00:10:12.778 QEMU NVMe Ctrl (12340 ): 35128 I/Os completed (+3647) 00:10:12.778 QEMU NVMe Ctrl (12341 ): 34792 I/Os completed (+3640) 00:10:12.778 00:10:13.721 QEMU NVMe Ctrl (12340 ): 38854 I/Os completed (+3726) 00:10:13.721 QEMU NVMe Ctrl (12341 ): 38520 I/Os completed (+3728) 00:10:13.721 00:10:15.101 QEMU NVMe Ctrl (12340 ): 41990 I/Os completed (+3136) 00:10:15.101 QEMU NVMe Ctrl (12341 ): 41663 I/Os completed (+3143) 00:10:15.101 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.101 [2024-12-05 16:56:49.104162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.101 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:15.101 [2024-12-05 16:56:49.105703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.105883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.105926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.106018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.101 [2024-12-05 16:56:49.108170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.108338] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.108377] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.108444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.101 [2024-12-05 16:56:49.126209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:15.101 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:15.101 [2024-12-05 16:56:49.127576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.127718] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.127761] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.127789] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.101 [2024-12-05 16:56:49.130163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.130228] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.130248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 [2024-12-05 16:56:49.130264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:15.101 Attaching to 0000:00:10.0 00:10:15.101 Attached to 0000:00:10.0 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.101 16:56:49 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.101 Attaching to 0000:00:11.0 00:10:15.101 Attached to 0000:00:11.0 00:10:15.670 QEMU NVMe Ctrl (12340 ): 1904 I/Os completed (+1904) 00:10:15.670 QEMU NVMe Ctrl (12341 ): 1696 I/Os completed (+1696) 00:10:15.670 00:10:17.051 QEMU NVMe Ctrl (12340 ): 4709 I/Os completed (+2805) 00:10:17.051 QEMU NVMe Ctrl (12341 ): 4501 I/Os completed (+2805) 00:10:17.051 00:10:17.992 QEMU NVMe Ctrl (12340 ): 7915 I/Os completed (+3206) 00:10:17.992 QEMU NVMe Ctrl (12341 ): 7705 I/Os completed (+3204) 00:10:17.992 00:10:18.927 QEMU NVMe Ctrl (12340 ): 11607 I/Os completed (+3692) 00:10:18.927 QEMU NVMe Ctrl (12341 ): 11394 I/Os completed (+3689) 00:10:18.927 00:10:19.863 QEMU NVMe Ctrl (12340 ): 14913 I/Os completed (+3306) 00:10:19.863 QEMU NVMe Ctrl (12341 ): 14708 I/Os completed (+3314) 00:10:19.863 00:10:20.797 QEMU NVMe Ctrl (12340 ): 18434 I/Os completed (+3521) 00:10:20.797 QEMU NVMe Ctrl (12341 ): 18222 I/Os completed (+3514) 00:10:20.797 00:10:21.736 QEMU NVMe Ctrl (12340 ): 22085 I/Os completed (+3651) 00:10:21.736 QEMU NVMe Ctrl (12341 ): 21899 I/Os completed (+3677) 00:10:21.736 00:10:22.680 QEMU NVMe Ctrl (12340 ): 25033 I/Os completed (+2948) 00:10:22.680 QEMU NVMe Ctrl (12341 ): 24828 I/Os completed (+2929) 00:10:22.680 
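The bare "echo 1" and "echo uio_pci_generic" xtrace lines in this phase look truncated, but they are not: set -x prints only the command words, never the redirection, so the interesting part, the sysfs node being written, is invisible. A sketch of the remove-and-rebind cycle those writes most plausibly perform (the exact target paths are an assumption; this log does not show them):

    bdf=0000:00:11.0                                       # example address from this run
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"            # sw_hotplug.sh@40: surprise hot-remove
    echo 1 > /sys/bus/pci/rescan                           # sw_hotplug.sh@56: rediscover the device
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe               # bind it to the override driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # sw_hotplug.sh@62: clear the override

The Attaching/Attached pairs after each rebind confirm both controllers come back under uio_pci_generic.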
00:10:24.066 QEMU NVMe Ctrl (12340 ): 28060 I/Os completed (+3027) 00:10:24.066 QEMU NVMe Ctrl (12341 ): 27876 I/Os completed (+3048) 00:10:24.066 00:10:25.010 QEMU NVMe Ctrl (12340 ): 31474 I/Os completed (+3414) 00:10:25.010 QEMU NVMe Ctrl (12341 ): 31286 I/Os completed (+3410) 00:10:25.010 00:10:25.953 QEMU NVMe Ctrl (12340 ): 35258 I/Os completed (+3784) 00:10:25.953 QEMU NVMe Ctrl (12341 ): 35063 I/Os completed (+3777) 00:10:25.953 00:10:26.897 QEMU NVMe Ctrl (12340 ): 38399 I/Os completed (+3141) 00:10:26.897 QEMU NVMe Ctrl (12341 ): 38188 I/Os completed (+3125) 00:10:26.897 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.158 [2024-12-05 16:57:01.435765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:27.158 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:27.158 [2024-12-05 16:57:01.436874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.437018] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.437108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.437128] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.158 [2024-12-05 16:57:01.438885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.438932] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.438945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.439085] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.158 [2024-12-05 16:57:01.456892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:27.158 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:27.158 [2024-12-05 16:57:01.457839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.457934] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.458021] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.458048] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.158 [2024-12-05 16:57:01.459464] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.459498] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.459512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 [2024-12-05 16:57:01.459523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.158 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/subsystem_vendor 00:10:27.158 EAL: Scan for (pci) bus failed. 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:27.158 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.419 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:27.420 Attaching to 0000:00:10.0 00:10:27.420 Attached to 0000:00:10.0 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.420 16:57:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:27.420 Attaching to 0000:00:11.0 00:10:27.420 Attached to 0000:00:11.0 00:10:27.420 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.420 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.420 [2024-12-05 16:57:01.685538] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:39.652 16:57:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:39.652 16:57:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:39.652 16:57:13 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.86 00:10:39.652 16:57:13 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.86 00:10:39.652 16:57:13 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:39.652 16:57:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.86 00:10:39.652 16:57:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.86 2 00:10:39.652 remove_attach_helper took 42.86s to complete (handling 2 nvme drive(s)) 16:57:13 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66629 00:10:46.237 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66629) - No such process 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66629 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67177 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:46.237 16:57:19 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67177 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67177 ']' 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:46.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:46.237 16:57:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.237 [2024-12-05 16:57:19.782727] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
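The "kill -0 66629" above delivers no signal at all: it only asks the kernel whether that PID still exists, and the "No such process" reply shows the hotplug app had already exited on its own. The "wait 66629" that follows reaps the child and propagates its exit status into the script. The same idiom stand-alone:

    # Probe for liveness without signalling; reap the child and surface
    # its exit code once it is gone (hotplug_pid comes from the trace above).
    if ! kill -0 "$hotplug_pid" 2>/dev/null; then
        wait "$hotplug_pid"
    fi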
00:10:46.238 [2024-12-05 16:57:19.782860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67177 ] 00:10:46.238 [2024-12-05 16:57:19.944684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.238 [2024-12-05 16:57:20.076863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:46.500 16:57:20 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:46.500 16:57:20 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.130 16:57:26 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.130 16:57:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.130 16:57:26 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:53.130 16:57:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.130 [2024-12-05 16:57:26.873791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:53.130 [2024-12-05 16:57:26.875087] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.130 [2024-12-05 16:57:26.875119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.130 [2024-12-05 16:57:26.875132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.130 [2024-12-05 16:57:26.875148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.130 [2024-12-05 16:57:26.875156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.130 [2024-12-05 16:57:26.875164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.130 [2024-12-05 16:57:26.875171] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.130 [2024-12-05 16:57:26.875179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.130 [2024-12-05 16:57:26.875185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.130 [2024-12-05 16:57:26.875196] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.130 [2024-12-05 16:57:26.875202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.130 [2024-12-05 16:57:26.875210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.130 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:53.130 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.130 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.130 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.131 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.131 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.131 16:57:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.131 16:57:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.131 16:57:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.131 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:53.131 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:53.391 [2024-12-05 16:57:27.573788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
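From here the test runs with use_bdev=true: a long-lived spdk_tgt owns the controllers, hotplug monitoring was enabled at sw_hotplug.sh@115 via bdev_nvme_set_hotplug -e, and progress is judged by asking which PCI addresses still back an nvme bdev. The bdev_bdfs helper traced above boils down to the following query, written here against SPDK's stock rpc.py client instead of the harness's rpc_cmd wrapper:

    # Which PCI addresses currently back an nvme bdev on the target?
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u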
00:10:53.391 [2024-12-05 16:57:27.574977] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.391 [2024-12-05 16:57:27.575003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.391 [2024-12-05 16:57:27.575014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.391 [2024-12-05 16:57:27.575027] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.391 [2024-12-05 16:57:27.575036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.391 [2024-12-05 16:57:27.575042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.391 [2024-12-05 16:57:27.575051] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.391 [2024-12-05 16:57:27.575057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.391 [2024-12-05 16:57:27.575065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.391 [2024-12-05 16:57:27.575072] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:53.391 [2024-12-05 16:57:27.575080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:53.391 [2024-12-05 16:57:27.575086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:53.651 16:57:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.651 16:57:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:53.651 16:57:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:53.651 16:57:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:53.651 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.651 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.651 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:53.913 16:57:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:06.158 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:06.158 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:06.158 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:06.158 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.159 [2024-12-05 16:57:40.273977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
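The heavily backslash-escaped [[ ... ]] at sw_hotplug.sh@71 is an xtrace artifact rather than a bug: the right-hand side of == inside [[ ]] is a glob pattern, so set -x escapes every character to record that it matched literally. Unescaped, the check is simply:

    # Did exactly the two expected controllers come back as bdevs?
    [[ "${bdfs[*]}" == "0000:00:10.0 0000:00:11.0" ]]    # bdfs filled by bdev_bdfs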
00:11:06.159 [2024-12-05 16:57:40.275258] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.159 [2024-12-05 16:57:40.275355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.159 [2024-12-05 16:57:40.275425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.159 [2024-12-05 16:57:40.275479] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.159 [2024-12-05 16:57:40.275497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.159 [2024-12-05 16:57:40.275549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.159 [2024-12-05 16:57:40.275600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.159 [2024-12-05 16:57:40.275618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.159 [2024-12-05 16:57:40.275640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.159 [2024-12-05 16:57:40.275735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.159 [2024-12-05 16:57:40.275754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.159 [2024-12-05 16:57:40.275778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.159 16:57:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:06.159 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.420 [2024-12-05 16:57:40.773971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
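Every removal produces the same message cluster: nvme_ctrlr_fail marks the controller failed, then nvme_pcie_qpair_abort_trackers cancels whatever the driver still had queued. Here the casualties are the four outstanding Asynchronous Event Requests (admin opcode 0x0c, the "(0c)" in the NOTICE lines), each completed as ABORTED - BY REQUEST (00/07), i.e. Status Code Type 0x0 (generic) with Status Code 0x07 (Command Abort Requested). An illustrative check that nothing beyond the AERs was in flight, run over a saved copy of the log (file name hypothetical):

    # Only cids 187-190 should appear: one per aborted AER per removal.
    grep -o 'cid:[0-9]*' console.log | sort | uniq -c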
00:11:06.420 [2024-12-05 16:57:40.775184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.420 [2024-12-05 16:57:40.775281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.420 [2024-12-05 16:57:40.775344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.420 [2024-12-05 16:57:40.775400] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.420 [2024-12-05 16:57:40.775420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.420 [2024-12-05 16:57:40.775468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.420 [2024-12-05 16:57:40.775494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.420 [2024-12-05 16:57:40.775511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.420 [2024-12-05 16:57:40.775686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.420 [2024-12-05 16:57:40.775709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:06.420 [2024-12-05 16:57:40.775726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:06.420 [2024-12-05 16:57:40.775748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.681 16:57:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.681 16:57:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.681 16:57:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.681 16:57:40 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:06.943 16:57:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.943 16:57:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.943 16:57:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.172 16:57:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:19.172 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.172 [2024-12-05 16:57:53.174173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:19.172 [2024-12-05 16:57:53.175421] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.172 [2024-12-05 16:57:53.175453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.172 [2024-12-05 16:57:53.175464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.172 [2024-12-05 16:57:53.175480] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.172 [2024-12-05 16:57:53.175487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.172 [2024-12-05 16:57:53.175498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.172 [2024-12-05 16:57:53.175505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.172 [2024-12-05 16:57:53.175513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.172 [2024-12-05 16:57:53.175519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.172 [2024-12-05 16:57:53.175527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.172 [2024-12-05 16:57:53.175534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.172 [2024-12-05 16:57:53.175541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.431 [2024-12-05 16:57:53.574168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
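While a removed controller drains, the script polls instead of sleeping blindly: re-run bdev_bdfs, print "Still waiting for %s to be gone" for every address still present, pause half a second, and try again; the (( 0 > 0 )) lines mark the iteration where the list finally came back empty. Reconstructed as a stand-alone loop (spelled with rpc.py; the script itself goes through rpc_cmd):

    # Poll until no nvme bdev claims a PCI address any more. $bdfs is left
    # unquoted in printf on purpose so the format repeats per address.
    while bdfs=$(scripts/rpc.py bdev_get_bdevs \
                     | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u)
          [[ -n "$bdfs" ]]; do
        printf 'Still waiting for %s to be gone\n' $bdfs
        sleep 0.5
    done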
00:11:19.431 [2024-12-05 16:57:53.575278] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.431 [2024-12-05 16:57:53.575306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.431 [2024-12-05 16:57:53.575317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.431 [2024-12-05 16:57:53.575328] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.431 [2024-12-05 16:57:53.575337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.431 [2024-12-05 16:57:53.575343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.431 [2024-12-05 16:57:53.575353] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.431 [2024-12-05 16:57:53.575360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.431 [2024-12-05 16:57:53.575369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.431 [2024-12-05 16:57:53.575376] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.431 [2024-12-05 16:57:53.575384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.431 [2024-12-05 16:57:53.575390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.431 16:57:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.431 16:57:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.431 16:57:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.431 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.690 16:57:53 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.16 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.16 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.16 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.16 2 00:11:31.910 remove_attach_helper took 45.16s to complete (handling 2 nvme drive(s)) 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.910 16:58:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.910 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:31.911 16:58:05 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:31.911 16:58:05 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:31.911 16:58:05 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:31.911 16:58:05 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:31.911 16:58:05 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:31.911 16:58:05 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:31.911 16:58:05 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:38.496 16:58:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.496 16:58:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.496 16:58:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.496 [2024-12-05 16:58:12.065890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:38.496 [2024-12-05 16:58:12.067143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.067248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.067308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.067437] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.067456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.067482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.067505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.067558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.067585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.067611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.067627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.067683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.496 16:58:12 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.496 16:58:12 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:38.496 16:58:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.496 [2024-12-05 16:58:12.665922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:38.496 [2024-12-05 16:58:12.666796] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.666825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.666837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.666850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.666858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.666865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.666874] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.666881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.666889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.496 [2024-12-05 16:58:12.666896] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.496 [2024-12-05 16:58:12.666904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.496 [2024-12-05 16:58:12.666910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.758 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:38.758 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.758 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.758 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.759 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.759 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.759 16:58:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.759 16:58:13 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.759 16:58:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:39.020 16:58:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.269 [2024-12-05 16:58:25.466137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
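The time=45.16 / helper_time=45.16 pair that closed the previous phase comes from timing_cmd, which runs the helper under bash's time builtin with TIMEFORMAT=%2R so that only wall-clock seconds, to two decimals, are captured for the "remove_attach_helper took ...s" summary. The same mechanism in isolation:

    # TIMEFORMAT=%2R reduces the time builtin's report to elapsed seconds.
    TIMEFORMAT=%2R
    time sleep 1.5    # prints 1.50 (on stderr)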
00:11:51.269 16:58:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.269 [2024-12-05 16:58:25.467143] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.269 [2024-12-05 16:58:25.467248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.269 [2024-12-05 16:58:25.467305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.269 [2024-12-05 16:58:25.467374] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.269 [2024-12-05 16:58:25.467393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.269 [2024-12-05 16:58:25.467418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.269 [2024-12-05 16:58:25.467475] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.269 [2024-12-05 16:58:25.467494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.269 [2024-12-05 16:58:25.467642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.269 [2024-12-05 16:58:25.467672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.269 [2024-12-05 16:58:25.467712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.269 [2024-12-05 16:58:25.467743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.269 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.840 [2024-12-05 16:58:25.966138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
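
The `(( 2 > 0 ))` test and `sleep 0.5` just above are one unrolled iteration of the wait-for-removal poll at sw_hotplug.sh@50-51. A minimal sketch consistent with the trace order (condition, sleep, then the status printf); the exact loop syntax is an assumption:

    # Poll until no NVMe bdev still reports a PCI address (sw_hotplug.sh@50-51).
    while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)) && sleep 0.5; do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    done

Because bash printf reuses its format string for every remaining argument, the single call at @51 emits one "Still waiting" line per outstanding BDF, which is why both 0000:00:10.0 and 0000:00:11.0 appear from one invocation.
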
00:11:51.840 [2024-12-05 16:58:25.967118] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.840 [2024-12-05 16:58:25.967218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.840 [2024-12-05 16:58:25.967282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.840 [2024-12-05 16:58:25.967337] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.840 [2024-12-05 16:58:25.967359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.840 [2024-12-05 16:58:25.967413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.840 [2024-12-05 16:58:25.967487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.840 [2024-12-05 16:58:25.967504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.840 [2024-12-05 16:58:25.967556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.840 [2024-12-05 16:58:25.967583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.840 [2024-12-05 16:58:25.967602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.840 [2024-12-05 16:58:25.967625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.840 16:58:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.840 16:58:25 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.840 16:58:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.840 16:58:25 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.840 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.841 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:52.102 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:52.102 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:52.102 16:58:26 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.334 16:58:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:04.334 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.334 [2024-12-05 16:58:38.366320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
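
The comparison traced at sw_hotplug.sh@71 a few entries above checks that both controllers reappeared after the rescan. The wall of backslashes on its right-hand side is not corruption: inside [[ ]], a quoted string on the right of == is matched literally rather than as a glob, and xtrace marks that by escaping every character. A small demonstration:

    set -x
    bdfs_str="0000:00:10.0 0000:00:11.0"
    [[ $bdfs_str == "0000:00:10.0 0000:00:11.0" ]] && echo both back
    # xtrace renders the quoted right-hand side as
    # \0\0\0\0\:\0\0\:\1\0\.\0\ ... to show it is matched character
    # by character, exactly as in the log entry above
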
00:12:04.334 [2024-12-05 16:58:38.368794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.334 [2024-12-05 16:58:38.368904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.334 [2024-12-05 16:58:38.368977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.334 [2024-12-05 16:58:38.369037] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.334 [2024-12-05 16:58:38.369057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.334 [2024-12-05 16:58:38.369112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.334 [2024-12-05 16:58:38.369140] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.334 [2024-12-05 16:58:38.369159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.334 [2024-12-05 16:58:38.369207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.334 [2024-12-05 16:58:38.369235] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.334 [2024-12-05 16:58:38.369282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.334 [2024-12-05 16:58:38.369313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.594 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:04.594 16:58:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.594 16:58:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.594 [2024-12-05 16:58:38.866315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
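
The trio that keeps recurring in this excerpt, rpc_cmd bdev_get_bdevs feeding jq -r '.[].driver_specific.nvme[].pci_address' and sort -u, is the body of the bdev_bdfs helper (sw_hotplug.sh@12-13). The /dev/fd/63 argument jq receives in the trace is the file descriptor bash allocates for process substitution, which pins the plumbing down fairly well; a reconstruction:

    # bdev_bdfs, reconstructed from the xtrace: list the PCI address of every
    # NVMe-backed bdev the target currently exposes, deduplicated.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

An empty result means every controller has finished detaching, which is exactly what the surrounding (( ... > 0 )) checks wait for.
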
00:12:04.594 [2024-12-05 16:58:38.867161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.594 [2024-12-05 16:58:38.867184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.594 [2024-12-05 16:58:38.867196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.594 [2024-12-05 16:58:38.867208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.594 [2024-12-05 16:58:38.867216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.594 [2024-12-05 16:58:38.867223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.594 [2024-12-05 16:58:38.867232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.594 [2024-12-05 16:58:38.867239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.594 [2024-12-05 16:58:38.867246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.594 [2024-12-05 16:58:38.867253] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.594 [2024-12-05 16:58:38.867262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.595 [2024-12-05 16:58:38.867268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.595 16:58:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.595 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:04.595 16:58:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.165 16:58:39 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.165 16:58:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.165 16:58:39 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.165 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.426 16:58:39 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:05.426 16:58:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.73 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.73 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:12:17.686 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:17.686 16:58:51 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67177 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67177 ']' 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67177 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67177 00:12:17.686 killing process with pid 67177 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67177' 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67177 00:12:17.686 16:58:51 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67177 00:12:18.632 16:58:52 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:18.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:19.466 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.466 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:19.466 0000:00:13.0 (1b36 0010): nvme -> 
uio_pci_generic 00:12:19.467 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:19.467 ************************************ 00:12:19.467 END TEST sw_hotplug 00:12:19.467 ************************************ 00:12:19.467 00:12:19.467 real 2m30.690s 00:12:19.467 user 1m52.220s 00:12:19.467 sys 0m16.987s 00:12:19.467 16:58:53 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.467 16:58:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.730 16:58:53 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:19.730 16:58:53 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.730 16:58:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:19.730 16:58:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.730 16:58:53 -- common/autotest_common.sh@10 -- # set +x 00:12:19.730 ************************************ 00:12:19.730 START TEST nvme_xnvme 00:12:19.730 ************************************ 00:12:19.730 16:58:53 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:19.730 * Looking for test storage... 00:12:19.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:19.730 16:58:53 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.730 16:58:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.730 16:58:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.730 16:58:53 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.730 16:58:53 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.730 16:58:53 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:19.730 16:58:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.731 16:58:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.731 --rc genhtml_branch_coverage=1 00:12:19.731 --rc genhtml_function_coverage=1 00:12:19.731 --rc genhtml_legend=1 00:12:19.731 --rc geninfo_all_blocks=1 00:12:19.731 --rc geninfo_unexecuted_blocks=1 00:12:19.731 00:12:19.731 ' 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.731 --rc genhtml_branch_coverage=1 00:12:19.731 --rc genhtml_function_coverage=1 00:12:19.731 --rc genhtml_legend=1 00:12:19.731 --rc geninfo_all_blocks=1 00:12:19.731 --rc geninfo_unexecuted_blocks=1 00:12:19.731 00:12:19.731 ' 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.731 --rc genhtml_branch_coverage=1 00:12:19.731 --rc genhtml_function_coverage=1 00:12:19.731 --rc genhtml_legend=1 00:12:19.731 --rc geninfo_all_blocks=1 00:12:19.731 --rc geninfo_unexecuted_blocks=1 00:12:19.731 00:12:19.731 ' 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.731 --rc genhtml_branch_coverage=1 00:12:19.731 --rc genhtml_function_coverage=1 00:12:19.731 --rc genhtml_legend=1 00:12:19.731 --rc geninfo_all_blocks=1 00:12:19.731 --rc geninfo_unexecuted_blocks=1 00:12:19.731 00:12:19.731 ' 00:12:19.731 16:58:54 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:12:19.731 16:58:54 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:12:19.731 16:58:54 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:12:19.731 16:58:54 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:19.731 16:58:54 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:19.732 16:58:54 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:19.732 16:58:54 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:19.732 16:58:54 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:12:19.732 16:58:54 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:19.732 #define SPDK_CONFIG_H 00:12:19.732 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:19.732 #define SPDK_CONFIG_APPS 1 00:12:19.732 #define SPDK_CONFIG_ARCH native 00:12:19.732 #define SPDK_CONFIG_ASAN 1 00:12:19.732 #undef SPDK_CONFIG_AVAHI 00:12:19.732 #undef SPDK_CONFIG_CET 00:12:19.732 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:19.732 #define SPDK_CONFIG_COVERAGE 1 00:12:19.732 #define SPDK_CONFIG_CROSS_PREFIX 00:12:19.732 #undef SPDK_CONFIG_CRYPTO 00:12:19.732 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:19.732 #undef SPDK_CONFIG_CUSTOMOCF 00:12:19.732 #undef SPDK_CONFIG_DAOS 00:12:19.732 #define SPDK_CONFIG_DAOS_DIR 00:12:19.732 #define SPDK_CONFIG_DEBUG 1 00:12:19.732 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:19.732 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:12:19.732 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:19.732 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:19.732 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:19.732 #undef SPDK_CONFIG_DPDK_UADK 00:12:19.732 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:12:19.732 #define SPDK_CONFIG_EXAMPLES 1 00:12:19.732 #undef SPDK_CONFIG_FC 00:12:19.732 #define SPDK_CONFIG_FC_PATH 00:12:19.732 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:19.732 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:19.732 #define SPDK_CONFIG_FSDEV 1 00:12:19.732 #undef SPDK_CONFIG_FUSE 00:12:19.732 #undef SPDK_CONFIG_FUZZER 00:12:19.732 #define SPDK_CONFIG_FUZZER_LIB 00:12:19.732 #undef SPDK_CONFIG_GOLANG 00:12:19.732 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:19.732 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:19.732 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:19.732 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:19.732 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:19.732 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:19.732 #undef SPDK_CONFIG_HAVE_LZ4 00:12:19.732 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:19.732 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:19.732 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:19.732 #define SPDK_CONFIG_IDXD 1 00:12:19.732 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:19.732 #undef SPDK_CONFIG_IPSEC_MB 00:12:19.732 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:19.732 #define SPDK_CONFIG_ISAL 1 00:12:19.732 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:19.732 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:19.732 #define SPDK_CONFIG_LIBDIR 00:12:19.732 #undef SPDK_CONFIG_LTO 00:12:19.732 #define SPDK_CONFIG_MAX_LCORES 128 00:12:19.732 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:19.732 #define SPDK_CONFIG_NVME_CUSE 1 00:12:19.732 #undef SPDK_CONFIG_OCF 00:12:19.732 #define SPDK_CONFIG_OCF_PATH 00:12:19.732 #define SPDK_CONFIG_OPENSSL_PATH 00:12:19.732 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:19.732 #define SPDK_CONFIG_PGO_DIR 00:12:19.732 #undef SPDK_CONFIG_PGO_USE 00:12:19.733 #define SPDK_CONFIG_PREFIX /usr/local 00:12:19.733 #undef SPDK_CONFIG_RAID5F 00:12:19.733 #undef SPDK_CONFIG_RBD 00:12:19.733 #define SPDK_CONFIG_RDMA 1 00:12:19.733 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:19.733 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:19.733 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:19.733 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:19.733 #define SPDK_CONFIG_SHARED 1 00:12:19.733 #undef SPDK_CONFIG_SMA 00:12:19.733 #define SPDK_CONFIG_TESTS 1 00:12:19.733 #undef SPDK_CONFIG_TSAN 00:12:19.733 #define SPDK_CONFIG_UBLK 1 00:12:19.733 #define SPDK_CONFIG_UBSAN 1 00:12:19.733 #undef SPDK_CONFIG_UNIT_TESTS 00:12:19.733 #undef SPDK_CONFIG_URING 00:12:19.733 #define SPDK_CONFIG_URING_PATH 00:12:19.733 #undef SPDK_CONFIG_URING_ZNS 00:12:19.733 #undef SPDK_CONFIG_USDT 00:12:19.733 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:19.733 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:19.733 #undef SPDK_CONFIG_VFIO_USER 00:12:19.733 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:19.733 #define SPDK_CONFIG_VHOST 1 00:12:19.733 #define SPDK_CONFIG_VIRTIO 1 00:12:19.733 #undef SPDK_CONFIG_VTUNE 00:12:19.733 #define SPDK_CONFIG_VTUNE_DIR 00:12:19.733 #define SPDK_CONFIG_WERROR 1 00:12:19.733 #define SPDK_CONFIG_WPDK_DIR 00:12:19.733 #define SPDK_CONFIG_XNVME 1 00:12:19.733 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:19.733 16:58:54 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:19.733 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.733 16:58:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.733 16:58:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.733 16:58:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.733 16:58:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.733 16:58:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.733 16:58:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.733 16:58:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.733 16:58:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:19.733 16:58:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@68 -- # uname -s 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:19.733 
16:58:54 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:12:19.733 16:58:54 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:19.733 16:58:54 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:12:19.733 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:19.734 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.734 16:58:54 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.735 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
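
The rm/cat/echo/LSAN_OPTIONS sequence traced at autotest_common.sh@204-244 above assembles a LeakSanitizer suppression file before the test app starts. A condensed sketch of the same idea; the bare `cat` at @206 presumably appends further suppressions from a source this excerpt does not show:

    # Build an LSan suppression list and point the sanitizer runtime at it.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" >> "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

This is why an ASAN build (SPDK_RUN_ASAN is exported as 1 a few entries earlier) does not fail the job on known leaks inside libfuse3.
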
00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68548 ]] 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68548 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Wi7hdL 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:19.735 16:58:54 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.Wi7hdL/tests/xnvme /tmp/spdk.Wi7hdL 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:19.999 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953138688 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5615009792 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260621312 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493358080 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506153984 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13953138688 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5615009792 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265233408 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265384960 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98835120128 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=867659776 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:19.999 16:58:54 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:19.999 * Looking for test storage... 
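Everything df -T printed above lands in per-mount associative arrays that the storage search below then walks. A condensed sketch of that scan, assuming df's default 1K-block units (hence the *1024 scaling) and the 2214592512-byte request visible in the trace:

# Parse `df -T` into per-mount tables (field order matches the read above).
declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
    mounts["$mount"]=$source
    fss["$mount"]=$fs
    sizes["$mount"]=$((size * 1024))     # df reports 1K blocks; keep bytes
    avails["$mount"]=$((avail * 1024))
    uses["$mount"]=$((use * 1024))
done < <(df -T | grep -v Filesystem)

# Resolve the mount backing the test dir and check it can hold the request.
target_dir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
(( ${avails[$mount]:-0} >= 2214592512 )) &&
    printf '* Found test storage at %s\n' "$target_dir"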
00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13953138688 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.000 --rc genhtml_branch_coverage=1 00:12:20.000 --rc genhtml_function_coverage=1 00:12:20.000 --rc genhtml_legend=1 00:12:20.000 --rc geninfo_all_blocks=1 00:12:20.000 --rc geninfo_unexecuted_blocks=1 00:12:20.000 00:12:20.000 ' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.000 --rc genhtml_branch_coverage=1 00:12:20.000 --rc genhtml_function_coverage=1 00:12:20.000 --rc genhtml_legend=1 00:12:20.000 --rc geninfo_all_blocks=1 
00:12:20.000 --rc geninfo_unexecuted_blocks=1 00:12:20.000 00:12:20.000 ' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.000 --rc genhtml_branch_coverage=1 00:12:20.000 --rc genhtml_function_coverage=1 00:12:20.000 --rc genhtml_legend=1 00:12:20.000 --rc geninfo_all_blocks=1 00:12:20.000 --rc geninfo_unexecuted_blocks=1 00:12:20.000 00:12:20.000 ' 00:12:20.000 16:58:54 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:20.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.000 --rc genhtml_branch_coverage=1 00:12:20.000 --rc genhtml_function_coverage=1 00:12:20.000 --rc genhtml_legend=1 00:12:20.000 --rc geninfo_all_blocks=1 00:12:20.000 --rc geninfo_unexecuted_blocks=1 00:12:20.000 00:12:20.000 ' 00:12:20.000 16:58:54 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.000 16:58:54 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.000 16:58:54 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.001 16:58:54 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.001 16:58:54 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.001 16:58:54 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:20.001 16:58:54 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.001 16:58:54 nvme_xnvme -- 
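The lt 1.15 2 walk a few lines up is scripts/common.sh checking whether the installed lcov predates 2.x before composing LCOV_OPTS. A simplified sketch of that comparison, assuming purely numeric fields (the real cmp_versions also validates each component through its decimal helper):

lt() {   # succeed when version $1 sorts before version $2
    local -a ver1 ver2
    local v n
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
    IFS=.-: read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # decided on the first field, as in the trace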
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:12:20.001 16:58:54 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:20.294 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:20.648 Waiting for block devices as requested 00:12:20.648 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.648 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.648 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:20.648 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:25.942 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:25.942 16:59:00 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:12:26.203 16:59:00 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:12:26.203 16:59:00 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:12:26.464 16:59:00 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:12:26.464 No valid GPT data, bailing 00:12:26.464 16:59:00 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:12:26.464 16:59:00 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:26.464 16:59:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:26.464 16:59:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.464 16:59:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.464 16:59:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:26.464 ************************************ 00:12:26.464 START TEST xnvme_rpc 00:12:26.464 ************************************ 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=68938 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 68938 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 68938 ']' 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.464 16:59:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:26.464 [2024-12-05 16:59:00.787591] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
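The xnvme_rpc run starting here is one cell of a larger matrix: xnvme.sh iterates every io_mechanism in xnvme_io against both conserve_cpu settings and repeats the same tests in each cell. A condensed sketch of that driver loop, built from the arrays declared above (run_test is the harness wrapper that times and labels each test):

xnvme_io=(libaio io_uring io_uring_cmd)
xnvme_conserve_cpu=(false true)
declare -A xnvme_filename=([libaio]=/dev/nvme0n1
                           [io_uring]=/dev/nvme0n1
                           [io_uring_cmd]=/dev/ng0n1)
declare -A method_bdev_xnvme_create_0=([name]=xnvme_bdev)

for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0[io_mechanism]=$io
    method_bdev_xnvme_create_0[filename]=${xnvme_filename[$io]}
    for cc in "${xnvme_conserve_cpu[@]}"; do
        method_bdev_xnvme_create_0[conserve_cpu]=$cc
        run_test xnvme_rpc xnvme_rpc
        run_test xnvme_bdevperf xnvme_bdevperf
        run_test xnvme_fio_plugin xnvme_fio_plugin
    done
done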
00:12:26.464 [2024-12-05 16:59:00.787746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68938 ] 00:12:26.725 [2024-12-05 16:59:00.950664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.725 [2024-12-05 16:59:01.071459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 xnvme_bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:27.668 16:59:01 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 68938 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 68938 ']' 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 68938 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:27.668 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68938 00:12:27.669 killing process with pid 68938 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68938' 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 68938 00:12:27.669 16:59:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 68938 00:12:29.581 ************************************ 00:12:29.581 END TEST xnvme_rpc 00:12:29.581 ************************************ 00:12:29.581 00:12:29.581 real 0m2.893s 00:12:29.581 user 0m2.886s 00:12:29.581 sys 0m0.483s 00:12:29.581 16:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.581 16:59:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.581 16:59:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:29.581 16:59:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:29.581 16:59:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.581 16:59:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:29.581 ************************************ 00:12:29.581 START TEST xnvme_bdevperf 00:12:29.581 ************************************ 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:29.581 16:59:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:29.581 { 00:12:29.581 "subsystems": [ 00:12:29.581 { 00:12:29.581 "subsystem": "bdev", 00:12:29.581 "config": [ 00:12:29.581 { 00:12:29.581 "params": { 00:12:29.581 "io_mechanism": "libaio", 00:12:29.581 "conserve_cpu": false, 00:12:29.581 "filename": "/dev/nvme0n1", 00:12:29.581 "name": "xnvme_bdev" 00:12:29.581 }, 00:12:29.581 "method": "bdev_xnvme_create" 00:12:29.581 }, 00:12:29.581 { 00:12:29.581 "method": "bdev_wait_for_examine" 00:12:29.581 } 00:12:29.581 ] 00:12:29.581 } 00:12:29.581 ] 00:12:29.581 } 00:12:29.581 [2024-12-05 16:59:03.738660] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:29.581 [2024-12-05 16:59:03.739104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69006 ] 00:12:29.581 [2024-12-05 16:59:03.903162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.842 [2024-12-05 16:59:04.022255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.104 Running I/O for 5 seconds... 00:12:31.989 27745.00 IOPS, 108.38 MiB/s [2024-12-05T16:59:07.744Z] 27838.50 IOPS, 108.74 MiB/s [2024-12-05T16:59:08.688Z] 27626.00 IOPS, 107.91 MiB/s [2024-12-05T16:59:09.633Z] 27594.50 IOPS, 107.79 MiB/s 00:12:35.266 Latency(us) 00:12:35.266 [2024-12-05T16:59:09.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:35.266 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:35.266 xnvme_bdev : 5.00 27703.16 108.22 0.00 0.00 2305.02 485.22 8872.57 00:12:35.266 [2024-12-05T16:59:09.633Z] =================================================================================================================== 00:12:35.266 [2024-12-05T16:59:09.633Z] Total : 27703.16 108.22 0.00 0.00 2305.02 485.22 8872.57 00:12:35.837 16:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:35.837 16:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:35.837 16:59:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:35.837 16:59:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:35.837 16:59:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:35.837 { 00:12:35.837 "subsystems": [ 00:12:35.837 { 00:12:35.837 "subsystem": "bdev", 00:12:35.837 "config": [ 00:12:35.837 { 00:12:35.837 "params": { 00:12:35.837 "io_mechanism": "libaio", 00:12:35.837 "conserve_cpu": false, 00:12:35.837 "filename": "/dev/nvme0n1", 00:12:35.837 "name": "xnvme_bdev" 00:12:35.837 }, 00:12:35.837 "method": "bdev_xnvme_create" 00:12:35.837 }, 00:12:35.837 { 00:12:35.837 "method": "bdev_wait_for_examine" 00:12:35.837 } 00:12:35.837 ] 00:12:35.837 } 00:12:35.837 ] 00:12:35.837 } 00:12:36.098 [2024-12-05 16:59:10.231832] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
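Each bdevperf pass above is configured entirely by the JSON fragment printed before it: one bdev_xnvme_create call plus a bdev_wait_for_examine barrier, streamed in over an anonymous fd. A sketch of an equivalent standalone invocation, with process substitution standing in for the harness's gen_conf-on-fd-62 plumbing (flags copied from the randread pass):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json <(cat <<'EOF'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [
      { "params": { "io_mechanism": "libaio", "conserve_cpu": false,
                    "filename": "/dev/nvme0n1", "name": "xnvme_bdev" },
        "method": "bdev_xnvme_create" },
      { "method": "bdev_wait_for_examine" }
    ]
  } ]
}
EOF
) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096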
00:12:36.098 [2024-12-05 16:59:10.232537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69088 ] 00:12:36.098 [2024-12-05 16:59:10.398057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.360 [2024-12-05 16:59:10.517182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.621 Running I/O for 5 seconds... 00:12:38.510 32625.00 IOPS, 127.44 MiB/s [2024-12-05T16:59:14.264Z] 33558.50 IOPS, 131.09 MiB/s [2024-12-05T16:59:14.837Z] 23650.67 IOPS, 92.39 MiB/s [2024-12-05T16:59:16.222Z] 18266.25 IOPS, 71.35 MiB/s [2024-12-05T16:59:16.222Z] 15023.00 IOPS, 58.68 MiB/s 00:12:41.855 Latency(us) 00:12:41.855 [2024-12-05T16:59:16.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.855 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:41.855 xnvme_bdev : 5.04 14926.37 58.31 0.00 0.00 4271.29 88.62 44766.13 00:12:41.855 [2024-12-05T16:59:16.222Z] =================================================================================================================== 00:12:41.855 [2024-12-05T16:59:16.222Z] Total : 14926.37 58.31 0.00 0.00 4271.29 88.62 44766.13 00:12:42.428 00:12:42.428 real 0m12.991s 00:12:42.428 user 0m7.190s 00:12:42.428 sys 0m4.544s 00:12:42.428 16:59:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.428 ************************************ 00:12:42.428 END TEST xnvme_bdevperf 00:12:42.428 16:59:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:42.428 ************************************ 00:12:42.428 16:59:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:42.428 16:59:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:42.428 16:59:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.428 16:59:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:42.428 ************************************ 00:12:42.428 START TEST xnvme_fio_plugin 00:12:42.428 ************************************ 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:42.428 16:59:16 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:42.428 16:59:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:42.428 { 00:12:42.428 "subsystems": [ 00:12:42.428 { 00:12:42.428 "subsystem": "bdev", 00:12:42.428 "config": [ 00:12:42.428 { 00:12:42.428 "params": { 00:12:42.428 "io_mechanism": "libaio", 00:12:42.428 "conserve_cpu": false, 00:12:42.428 "filename": "/dev/nvme0n1", 00:12:42.429 "name": "xnvme_bdev" 00:12:42.429 }, 00:12:42.429 "method": "bdev_xnvme_create" 00:12:42.429 }, 00:12:42.429 { 00:12:42.429 "method": "bdev_wait_for_examine" 00:12:42.429 } 00:12:42.429 ] 00:12:42.429 } 00:12:42.429 ] 00:12:42.429 } 00:12:42.687 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:42.687 fio-3.35 00:12:42.687 Starting 1 thread 00:12:49.271 00:12:49.271 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69202: Thu Dec 5 16:59:22 2024 00:12:49.271 read: IOPS=35.9k, BW=140MiB/s (147MB/s)(702MiB/5003msec) 00:12:49.271 slat (usec): min=4, max=1723, avg=20.62, stdev=80.13 00:12:49.271 clat (usec): min=27, max=12849, avg=1284.65, stdev=640.12 00:12:49.271 lat (usec): min=80, max=12854, avg=1305.27, stdev=637.13 00:12:49.271 clat percentiles (usec): 00:12:49.271 | 1.00th=[ 253], 5.00th=[ 457], 10.00th=[ 603], 20.00th=[ 799], 00:12:49.271 | 30.00th=[ 938], 40.00th=[ 1074], 50.00th=[ 1205], 60.00th=[ 1336], 00:12:49.271 | 70.00th=[ 1483], 80.00th=[ 1696], 90.00th=[ 2040], 95.00th=[ 2409], 00:12:49.271 | 99.00th=[ 3228], 99.50th=[ 3621], 99.90th=[ 6390], 99.95th=[ 8094], 00:12:49.271 | 99.99th=[ 8848] 00:12:49.271 bw ( KiB/s): min=129776, max=168632, 
per=100.00%, avg=145650.67, stdev=11373.21, samples=9 00:12:49.271 iops : min=32444, max=42158, avg=36412.67, stdev=2843.30, samples=9 00:12:49.271 lat (usec) : 50=0.01%, 100=0.02%, 250=0.95%, 500=5.30%, 750=10.98% 00:12:49.271 lat (usec) : 1000=17.42% 00:12:49.271 lat (msec) : 2=54.50%, 4=10.51%, 10=0.32%, 20=0.01% 00:12:49.271 cpu : usr=37.98%, sys=50.44%, ctx=17, majf=0, minf=764 00:12:49.271 IO depths : 1=0.2%, 2=0.6%, 4=1.8%, 8=6.1%, 16=21.4%, 32=67.4%, >=64=2.5% 00:12:49.271 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.271 complete : 0=0.0%, 4=97.7%, 8=0.1%, 16=0.2%, 32=0.4%, 64=1.6%, >=64=0.0% 00:12:49.271 issued rwts: total=179701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.271 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:49.271 00:12:49.271 Run status group 0 (all jobs): 00:12:49.271 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=702MiB (736MB), run=5003-5003msec 00:12:49.271 ----------------------------------------------------- 00:12:49.271 Suppressions used: 00:12:49.271 count bytes template 00:12:49.271 1 11 /usr/src/fio/parse.c 00:12:49.271 1 8 libtcmalloc_minimal.so 00:12:49.271 1 904 libcrypto.so 00:12:49.271 ----------------------------------------------------- 00:12:49.271 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:49.271 16:59:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:49.532 { 00:12:49.532 "subsystems": [ 00:12:49.532 { 00:12:49.532 "subsystem": "bdev", 00:12:49.532 "config": [ 00:12:49.532 { 00:12:49.532 "params": { 00:12:49.532 "io_mechanism": "libaio", 00:12:49.532 "conserve_cpu": false, 00:12:49.532 "filename": "/dev/nvme0n1", 00:12:49.532 "name": "xnvme_bdev" 00:12:49.532 }, 00:12:49.532 "method": "bdev_xnvme_create" 00:12:49.532 }, 00:12:49.532 { 00:12:49.532 "method": "bdev_wait_for_examine" 00:12:49.532 } 00:12:49.532 ] 00:12:49.532 } 00:12:49.532 ] 00:12:49.532 } 00:12:49.532 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:49.532 fio-3.35 00:12:49.532 Starting 1 thread 00:12:56.127 00:12:56.127 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69299: Thu Dec 5 16:59:29 2024 00:12:56.127 write: IOPS=14.7k, BW=57.5MiB/s (60.3MB/s)(288MiB/5008msec); 0 zone resets 00:12:56.127 slat (usec): min=4, max=1884, avg=15.66, stdev=50.21 00:12:56.127 clat (usec): min=6, max=22073, avg=4208.51, stdev=5251.05 00:12:56.127 lat (usec): min=58, max=22092, avg=4224.17, stdev=5249.18 00:12:56.127 clat percentiles (usec): 00:12:56.127 | 1.00th=[ 105], 5.00th=[ 200], 10.00th=[ 302], 20.00th=[ 457], 00:12:56.127 | 30.00th=[ 586], 40.00th=[ 668], 50.00th=[ 766], 60.00th=[ 1057], 00:12:56.127 | 70.00th=[ 8455], 80.00th=[11076], 90.00th=[12649], 95.00th=[13566], 00:12:56.127 | 99.00th=[15139], 99.50th=[15795], 99.90th=[19268], 99.95th=[20317], 00:12:56.127 | 99.99th=[21890] 00:12:56.127 bw ( KiB/s): min=52280, max=66648, per=100.00%, avg=58946.40, stdev=4033.89, samples=10 00:12:56.127 iops : min=13070, max=16662, avg=14736.60, stdev=1008.47, samples=10 00:12:56.127 lat (usec) : 10=0.01%, 20=0.05%, 50=0.15%, 100=0.62%, 250=6.20% 00:12:56.127 lat (usec) : 500=15.69%, 750=25.84%, 1000=10.38% 00:12:56.127 lat (msec) : 2=7.26%, 4=1.13%, 10=6.92%, 20=25.70%, 50=0.07% 00:12:56.127 cpu : usr=75.85%, sys=13.10%, ctx=43, majf=0, minf=765 00:12:56.127 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=1.0%, 32=83.4%, >=64=15.5% 00:12:56.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:56.127 complete : 0=0.0%, 4=93.7%, 8=2.8%, 16=2.5%, 32=1.0%, 64=0.1%, >=64=0.0% 00:12:56.127 issued rwts: total=0,73745,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:56.127 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:56.127 00:12:56.127 Run status group 0 (all jobs): 00:12:56.127 WRITE: bw=57.5MiB/s (60.3MB/s), 57.5MiB/s-57.5MiB/s (60.3MB/s-60.3MB/s), io=288MiB (302MB), run=5008-5008msec 00:12:56.388 ----------------------------------------------------- 00:12:56.388 Suppressions used: 00:12:56.388 count bytes template 00:12:56.388 1 11 /usr/src/fio/parse.c 00:12:56.388 1 8 libtcmalloc_minimal.so 00:12:56.388 1 904 libcrypto.so 00:12:56.388 
----------------------------------------------------- 00:12:56.388 00:12:56.388 00:12:56.388 real 0m13.814s 00:12:56.388 user 0m8.508s 00:12:56.388 sys 0m3.786s 00:12:56.388 16:59:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.388 ************************************ 00:12:56.388 END TEST xnvme_fio_plugin 00:12:56.388 ************************************ 00:12:56.388 16:59:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:56.388 16:59:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:56.388 16:59:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:56.388 16:59:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:56.388 16:59:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:56.388 16:59:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:56.388 16:59:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.388 16:59:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:56.388 ************************************ 00:12:56.388 START TEST xnvme_rpc 00:12:56.388 ************************************ 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69381 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69381 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69381 ']' 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:56.388 16:59:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:56.388 [2024-12-05 16:59:30.701100] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
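Both fio passes above go through fio_plugin rather than plain fio: with ASan compiled into the SPDK ioengine, the sanitizer runtime has to be loaded before fio dlopens the plugin, so the helper resolves it from ldd output and preloads both. A sketch of that dance, with config.json standing in here for the generated JSON the harness puts on fd 62:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# Pull the ASan runtime path out of the plugin's ldd listing, e.g.
#   "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"  ->  /usr/lib64/libasan.so.8
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# Sanitizer first, ioengine second; fio then reads the bdev config on fd 62.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev 62< config.json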
00:12:56.388 [2024-12-05 16:59:30.701241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69381 ] 00:12:56.650 [2024-12-05 16:59:30.864092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:56.650 [2024-12-05 16:59:30.983127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 xnvme_bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69381 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69381 ']' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69381 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69381 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:57.646 killing process with pid 69381 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69381' 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69381 00:12:57.646 16:59:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69381 00:12:59.570 00:12:59.570 real 0m2.871s 00:12:59.570 user 0m2.854s 00:12:59.570 sys 0m0.470s 00:12:59.570 16:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:59.570 ************************************ 00:12:59.570 16:59:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.570 END TEST xnvme_rpc 00:12:59.570 ************************************ 00:12:59.570 16:59:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:59.570 16:59:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:59.570 16:59:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.570 16:59:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:59.570 ************************************ 00:12:59.570 START TEST xnvme_bdevperf 00:12:59.570 ************************************ 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:59.570 16:59:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:59.570 { 00:12:59.570 "subsystems": [ 00:12:59.570 { 00:12:59.570 "subsystem": "bdev", 00:12:59.570 "config": [ 00:12:59.570 { 00:12:59.570 "params": { 00:12:59.570 "io_mechanism": "libaio", 00:12:59.570 "conserve_cpu": true, 00:12:59.570 "filename": "/dev/nvme0n1", 00:12:59.570 "name": "xnvme_bdev" 00:12:59.570 }, 00:12:59.570 "method": "bdev_xnvme_create" 00:12:59.570 }, 00:12:59.570 { 00:12:59.570 "method": "bdev_wait_for_examine" 00:12:59.570 } 00:12:59.570 ] 00:12:59.570 } 00:12:59.570 ] 00:12:59.570 } 00:12:59.570 [2024-12-05 16:59:33.627768] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:12:59.570 [2024-12-05 16:59:33.627917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69455 ] 00:12:59.570 [2024-12-05 16:59:33.783914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:59.570 [2024-12-05 16:59:33.901018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.143 Running I/O for 5 seconds... 00:13:02.032 36033.00 IOPS, 140.75 MiB/s [2024-12-05T16:59:37.344Z] 33060.50 IOPS, 129.14 MiB/s [2024-12-05T16:59:38.289Z] 32013.33 IOPS, 125.05 MiB/s [2024-12-05T16:59:39.234Z] 32709.00 IOPS, 127.77 MiB/s [2024-12-05T16:59:39.234Z] 32833.60 IOPS, 128.26 MiB/s 00:13:04.867 Latency(us) 00:13:04.867 [2024-12-05T16:59:39.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.867 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:04.867 xnvme_bdev : 5.00 32813.36 128.18 0.00 0.00 1945.54 79.16 11292.36 00:13:04.867 [2024-12-05T16:59:39.234Z] =================================================================================================================== 00:13:04.867 [2024-12-05T16:59:39.234Z] Total : 32813.36 128.18 0.00 0.00 1945.54 79.16 11292.36 00:13:05.811 16:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:05.811 16:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:05.811 16:59:40 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:05.811 16:59:40 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:05.811 16:59:40 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:05.811 { 00:13:05.811 "subsystems": [ 00:13:05.811 { 00:13:05.811 "subsystem": "bdev", 00:13:05.811 "config": [ 00:13:05.811 { 00:13:05.811 "params": { 00:13:05.811 "io_mechanism": "libaio", 00:13:05.811 "conserve_cpu": true, 00:13:05.811 "filename": "/dev/nvme0n1", 00:13:05.811 "name": "xnvme_bdev" 00:13:05.811 }, 00:13:05.811 "method": "bdev_xnvme_create" 00:13:05.811 }, 00:13:05.811 { 00:13:05.811 "method": "bdev_wait_for_examine" 00:13:05.811 } 00:13:05.811 ] 00:13:05.811 } 00:13:05.811 ] 00:13:05.811 } 00:13:05.811 [2024-12-05 16:59:40.078533] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
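gen_conf feeds the JSON fragment above to bdevperf through /dev/fd/62, but the file-descriptor plumbing is incidental; the same run reproduces with an ordinary config file. A sketch, assuming an SPDK build tree and the same /dev/nvme0n1 (the xnvme.json filename is invented here):

  cat > xnvme.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create",
     "params": {"io_mechanism": "libaio", "conserve_cpu": true,
                "filename": "/dev/nvme0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}]}]}
  EOF
  # 64-deep random 4 KiB I/O against the xnvme bdev for 5 seconds, matching
  # the -q/-w/-t/-T/-o flags used for every bdevperf pass in this log.
  ./build/examples/bdevperf --json xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096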
00:13:05.811 [2024-12-05 16:59:40.078683] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69530 ] 00:13:06.073 [2024-12-05 16:59:40.250493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:06.073 [2024-12-05 16:59:40.368150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:06.334 Running I/O for 5 seconds... 00:13:08.665 3244.00 IOPS, 12.67 MiB/s [2024-12-05T16:59:43.976Z] 3326.00 IOPS, 12.99 MiB/s [2024-12-05T16:59:44.917Z] 3291.33 IOPS, 12.86 MiB/s [2024-12-05T16:59:45.858Z] 4339.75 IOPS, 16.95 MiB/s [2024-12-05T16:59:45.858Z] 10483.00 IOPS, 40.95 MiB/s 00:13:11.491 Latency(us) 00:13:11.491 [2024-12-05T16:59:45.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.491 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:11.491 xnvme_bdev : 5.00 10488.88 40.97 0.00 0.00 6097.17 61.83 41539.74 00:13:11.491 [2024-12-05T16:59:45.858Z] =================================================================================================================== 00:13:11.491 [2024-12-05T16:59:45.858Z] Total : 10488.88 40.97 0.00 0.00 6097.17 61.83 41539.74 00:13:12.431 00:13:12.431 real 0m12.967s 00:13:12.431 user 0m7.513s 00:13:12.431 sys 0m4.051s 00:13:12.431 16:59:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:12.431 16:59:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:12.431 ************************************ 00:13:12.431 END TEST xnvme_bdevperf 00:13:12.431 ************************************ 00:13:12.431 16:59:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:12.431 16:59:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:12.431 16:59:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:12.431 16:59:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:12.431 ************************************ 00:13:12.431 START TEST xnvme_fio_plugin 00:13:12.431 ************************************ 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:12.431 16:59:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:12.431 { 00:13:12.431 "subsystems": [ 00:13:12.431 { 00:13:12.431 "subsystem": "bdev", 00:13:12.431 "config": [ 00:13:12.431 { 00:13:12.431 "params": { 00:13:12.431 "io_mechanism": "libaio", 00:13:12.431 "conserve_cpu": true, 00:13:12.431 "filename": "/dev/nvme0n1", 00:13:12.431 "name": "xnvme_bdev" 00:13:12.431 }, 00:13:12.431 "method": "bdev_xnvme_create" 00:13:12.431 }, 00:13:12.431 { 00:13:12.431 "method": "bdev_wait_for_examine" 00:13:12.431 } 00:13:12.431 ] 00:13:12.431 } 00:13:12.431 ] 00:13:12.431 } 00:13:12.431 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:12.431 fio-3.35 00:13:12.431 Starting 1 thread 00:13:19.065 00:13:19.065 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69649: Thu Dec 5 16:59:52 2024 00:13:19.065 read: IOPS=34.4k, BW=134MiB/s (141MB/s)(672MiB/5001msec) 00:13:19.065 slat (usec): min=4, max=1739, avg=22.43, stdev=92.96 00:13:19.065 clat (usec): min=105, max=6110, avg=1265.50, stdev=506.94 00:13:19.065 lat (usec): min=197, max=6115, avg=1287.93, stdev=498.66 00:13:19.065 clat percentiles (usec): 00:13:19.065 | 1.00th=[ 277], 5.00th=[ 510], 10.00th=[ 652], 20.00th=[ 857], 00:13:19.065 | 30.00th=[ 1004], 40.00th=[ 1123], 50.00th=[ 1237], 60.00th=[ 1352], 00:13:19.065 | 70.00th=[ 1467], 80.00th=[ 1614], 90.00th=[ 1876], 95.00th=[ 2147], 00:13:19.065 | 99.00th=[ 2835], 99.50th=[ 3163], 99.90th=[ 3851], 99.95th=[ 4146], 00:13:19.065 | 99.99th=[ 4948] 00:13:19.065 bw ( KiB/s): min=131944, max=144328, 
per=100.00%, avg=138994.67, stdev=5149.93, samples=9 00:13:19.065 iops : min=32986, max=36082, avg=34748.67, stdev=1287.48, samples=9 00:13:19.065 lat (usec) : 250=0.72%, 500=4.04%, 750=9.51%, 1000=15.42% 00:13:19.065 lat (msec) : 2=63.24%, 4=7.01%, 10=0.07% 00:13:19.065 cpu : usr=36.22%, sys=54.84%, ctx=30, majf=0, minf=764 00:13:19.065 IO depths : 1=0.4%, 2=1.0%, 4=2.8%, 8=8.3%, 16=23.6%, 32=61.8%, >=64=2.1% 00:13:19.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:19.065 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.7%, >=64=0.0% 00:13:19.065 issued rwts: total=172109,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:19.065 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:19.065 00:13:19.065 Run status group 0 (all jobs): 00:13:19.065 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=672MiB (705MB), run=5001-5001msec 00:13:19.327 ----------------------------------------------------- 00:13:19.327 Suppressions used: 00:13:19.327 count bytes template 00:13:19.327 1 11 /usr/src/fio/parse.c 00:13:19.327 1 8 libtcmalloc_minimal.so 00:13:19.327 1 904 libcrypto.so 00:13:19.327 ----------------------------------------------------- 00:13:19.327 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:19.327 16:59:53 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:19.327 16:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:19.327 { 00:13:19.327 "subsystems": [ 00:13:19.327 { 00:13:19.327 "subsystem": "bdev", 00:13:19.327 "config": [ 00:13:19.327 { 00:13:19.327 "params": { 00:13:19.327 "io_mechanism": "libaio", 00:13:19.327 "conserve_cpu": true, 00:13:19.327 "filename": "/dev/nvme0n1", 00:13:19.327 "name": "xnvme_bdev" 00:13:19.327 }, 00:13:19.327 "method": "bdev_xnvme_create" 00:13:19.327 }, 00:13:19.327 { 00:13:19.327 "method": "bdev_wait_for_examine" 00:13:19.327 } 00:13:19.327 ] 00:13:19.327 } 00:13:19.327 ] 00:13:19.327 } 00:13:19.589 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:19.589 fio-3.35 00:13:19.589 Starting 1 thread 00:13:26.187 00:13:26.187 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69741: Thu Dec 5 16:59:59 2024 00:13:26.187 write: IOPS=24.3k, BW=95.0MiB/s (99.6MB/s)(476MiB/5008msec); 0 zone resets 00:13:26.187 slat (usec): min=4, max=5495, avg=20.00, stdev=76.97 00:13:26.187 clat (usec): min=6, max=19314, avg=2206.50, stdev=3268.20 00:13:26.187 lat (usec): min=61, max=19328, avg=2226.50, stdev=3265.58 00:13:26.187 clat percentiles (usec): 00:13:26.187 | 1.00th=[ 145], 5.00th=[ 314], 10.00th=[ 449], 20.00th=[ 635], 00:13:26.187 | 30.00th=[ 758], 40.00th=[ 930], 50.00th=[ 1090], 60.00th=[ 1270], 00:13:26.187 | 70.00th=[ 1450], 80.00th=[ 1745], 90.00th=[ 8586], 95.00th=[11207], 00:13:26.187 | 99.00th=[13566], 99.50th=[14484], 99.90th=[16712], 99.95th=[17695], 00:13:26.187 | 99.99th=[18744] 00:13:26.187 bw ( KiB/s): min=66048, max=145696, per=100.00%, avg=97383.20, stdev=37001.55, samples=10 00:13:26.187 iops : min=16512, max=36424, avg=24345.80, stdev=9250.39, samples=10 00:13:26.187 lat (usec) : 10=0.01%, 20=0.01%, 50=0.08%, 100=0.26%, 250=2.63% 00:13:26.187 lat (usec) : 500=9.07%, 750=17.16%, 1000=14.93% 00:13:26.187 lat (msec) : 2=39.68%, 4=4.22%, 10=4.22%, 20=7.74% 00:13:26.187 cpu : usr=56.86%, sys=32.10%, ctx=192, majf=0, minf=765 00:13:26.187 IO depths : 1=0.2%, 2=0.5%, 4=1.7%, 8=5.3%, 16=15.0%, 32=70.7%, >=64=6.6% 00:13:26.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:26.187 complete : 0=0.0%, 4=96.3%, 8=1.0%, 16=1.0%, 32=0.6%, 64=1.0%, >=64=0.0% 00:13:26.187 issued rwts: total=0,121786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:26.187 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:26.187 00:13:26.187 Run status group 0 (all jobs): 00:13:26.187 WRITE: bw=95.0MiB/s (99.6MB/s), 95.0MiB/s-95.0MiB/s (99.6MB/s-99.6MB/s), io=476MiB (499MB), run=5008-5008msec 00:13:26.187 ----------------------------------------------------- 00:13:26.187 Suppressions used: 00:13:26.187 count bytes template 00:13:26.187 1 11 /usr/src/fio/parse.c 00:13:26.187 1 8 libtcmalloc_minimal.so 00:13:26.187 1 904 libcrypto.so 00:13:26.187 ----------------------------------------------------- 00:13:26.187 00:13:26.187 
************************************ 00:13:26.187 END TEST xnvme_fio_plugin 00:13:26.187 ************************************ 00:13:26.187 00:13:26.187 real 0m13.955s 00:13:26.187 user 0m7.560s 00:13:26.187 sys 0m4.995s 00:13:26.187 17:00:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.187 17:00:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:26.449 17:00:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:26.449 17:00:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:26.449 17:00:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.449 17:00:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.449 ************************************ 00:13:26.449 START TEST xnvme_rpc 00:13:26.449 ************************************ 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69827 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69827 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69827 ']' 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:26.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.449 17:00:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:26.449 [2024-12-05 17:00:00.715900] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
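Before the log moves into the io_uring pass below, note how the fio_plugin section that just closed differs from bdevperf: fio is driven through SPDK's external spdk_bdev ioengine, and the xtrace shows the harness locating the ASan runtime via ldd | grep libasan | awk and preloading it ahead of the plugin, presumably so the sanitizer is initialized before fio loads the sanitized .so. Stripped of that discovery logic, the invocation is as follows — a sketch, reusing the hypothetical xnvme.json from the bdevperf sketch above together with the exact LD_PRELOAD pair probed in this log:

  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=xnvme.json --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
    --thread=1 --name xnvme_bdev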
00:13:26.449 [2024-12-05 17:00:00.716076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69827 ] 00:13:26.710 [2024-12-05 17:00:00.880602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.710 [2024-12-05 17:00:01.003346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 xnvme_bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:27.650 17:00:01 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69827 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69827 ']' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69827 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69827 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:27.650 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:27.651 killing process with pid 69827 00:13:27.651 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69827' 00:13:27.651 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69827 00:13:27.651 17:00:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69827 00:13:29.563 00:13:29.563 real 0m2.915s 00:13:29.563 user 0m2.910s 00:13:29.563 sys 0m0.483s 00:13:29.563 ************************************ 00:13:29.563 END TEST xnvme_rpc 00:13:29.563 ************************************ 00:13:29.563 17:00:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.563 17:00:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.563 17:00:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:29.563 17:00:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:29.563 17:00:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.563 17:00:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:29.563 ************************************ 00:13:29.563 START TEST xnvme_bdevperf 00:13:29.563 ************************************ 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:29.563 17:00:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:29.563 { 00:13:29.563 "subsystems": [ 00:13:29.563 { 00:13:29.563 "subsystem": "bdev", 00:13:29.563 "config": [ 00:13:29.563 { 00:13:29.563 "params": { 00:13:29.563 "io_mechanism": "io_uring", 00:13:29.563 "conserve_cpu": false, 00:13:29.563 "filename": "/dev/nvme0n1", 00:13:29.563 "name": "xnvme_bdev" 00:13:29.563 }, 00:13:29.563 "method": "bdev_xnvme_create" 00:13:29.563 }, 00:13:29.563 { 00:13:29.563 "method": "bdev_wait_for_examine" 00:13:29.563 } 00:13:29.563 ] 00:13:29.564 } 00:13:29.564 ] 00:13:29.564 } 00:13:29.564 [2024-12-05 17:00:03.689491] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:29.564 [2024-12-05 17:00:03.689642] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69901 ] 00:13:29.564 [2024-12-05 17:00:03.856204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.826 [2024-12-05 17:00:03.979351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.087 Running I/O for 5 seconds... 00:13:31.971 34112.00 IOPS, 133.25 MiB/s [2024-12-05T17:00:07.280Z] 33684.00 IOPS, 131.58 MiB/s [2024-12-05T17:00:08.669Z] 33514.00 IOPS, 130.91 MiB/s [2024-12-05T17:00:09.613Z] 33353.75 IOPS, 130.29 MiB/s [2024-12-05T17:00:09.613Z] 33443.00 IOPS, 130.64 MiB/s 00:13:35.246 Latency(us) 00:13:35.246 [2024-12-05T17:00:09.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.246 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:35.246 xnvme_bdev : 5.00 33442.43 130.63 0.00 0.00 1909.40 258.36 12451.84 00:13:35.246 [2024-12-05T17:00:09.613Z] =================================================================================================================== 00:13:35.246 [2024-12-05T17:00:09.613Z] Total : 33442.43 130.63 0.00 0.00 1909.40 258.36 12451.84 00:13:35.819 17:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:35.819 17:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:35.819 17:00:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:35.819 17:00:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:35.819 17:00:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:35.819 { 00:13:35.819 "subsystems": [ 00:13:35.819 { 00:13:35.819 "subsystem": "bdev", 00:13:35.819 "config": [ 00:13:35.819 { 00:13:35.819 "params": { 00:13:35.819 "io_mechanism": "io_uring", 00:13:35.819 "conserve_cpu": false, 00:13:35.819 "filename": "/dev/nvme0n1", 00:13:35.819 "name": "xnvme_bdev" 00:13:35.819 }, 00:13:35.819 "method": "bdev_xnvme_create" 00:13:35.819 }, 00:13:35.819 { 00:13:35.819 "method": "bdev_wait_for_examine" 00:13:35.819 } 00:13:35.819 ] 00:13:35.819 } 00:13:35.819 ] 00:13:35.819 } 00:13:35.819 [2024-12-05 17:00:10.160368] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
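By this point the harness is in its second configuration: the same rpc/bdevperf/fio_plugin trio, now with io_mechanism io_uring and conserve_cpu false. The driver's overall shape, reconstructed in outline from the xnvme.sh line numbers visible in the xtrace (the full contents of the two arrays are not shown in this excerpt, so the comments are illustrative):

  for io in "${xnvme_io[@]}"; do                        # e.g. libaio, io_uring, ...
    method_bdev_xnvme_create_0["io_mechanism"]=$io      # xnvme.sh@76
    for cc in "${xnvme_conserve_cpu[@]}"; do            # false, then true
      method_bdev_xnvme_create_0["conserve_cpu"]=$cc    # xnvme.sh@83
      run_test xnvme_rpc xnvme_rpc                      # xnvme.sh@86
      run_test xnvme_bdevperf xnvme_bdevperf            # xnvme.sh@87
      run_test xnvme_fio_plugin xnvme_fio_plugin        # xnvme.sh@88
    done
  done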
00:13:35.819 [2024-12-05 17:00:10.160527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69971 ] 00:13:36.080 [2024-12-05 17:00:10.324381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.080 [2024-12-05 17:00:10.445633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:36.653 Running I/O for 5 seconds... 00:13:38.658 4568.00 IOPS, 17.84 MiB/s [2024-12-05T17:00:13.967Z] 4485.00 IOPS, 17.52 MiB/s [2024-12-05T17:00:14.912Z] 4531.00 IOPS, 17.70 MiB/s [2024-12-05T17:00:15.857Z] 4491.25 IOPS, 17.54 MiB/s [2024-12-05T17:00:15.857Z] 4462.20 IOPS, 17.43 MiB/s 00:13:41.490 Latency(us) 00:13:41.490 [2024-12-05T17:00:15.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:41.490 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:41.490 xnvme_bdev : 5.01 4462.07 17.43 0.00 0.00 14319.13 69.71 37708.41 00:13:41.490 [2024-12-05T17:00:15.857Z] =================================================================================================================== 00:13:41.490 [2024-12-05T17:00:15.857Z] Total : 4462.07 17.43 0.00 0.00 14319.13 69.71 37708.41 00:13:42.432 00:13:42.432 real 0m12.947s 00:13:42.432 user 0m5.900s 00:13:42.432 sys 0m6.779s 00:13:42.432 17:00:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:42.432 ************************************ 00:13:42.432 END TEST xnvme_bdevperf 00:13:42.432 ************************************ 00:13:42.432 17:00:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:42.432 17:00:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:42.432 17:00:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:42.432 17:00:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:42.432 17:00:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:42.432 ************************************ 00:13:42.432 START TEST xnvme_fio_plugin 00:13:42.432 ************************************ 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:42.432 17:00:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:42.432 { 00:13:42.432 "subsystems": [ 00:13:42.432 { 00:13:42.432 "subsystem": "bdev", 00:13:42.432 "config": [ 00:13:42.432 { 00:13:42.432 "params": { 00:13:42.432 "io_mechanism": "io_uring", 00:13:42.432 "conserve_cpu": false, 00:13:42.432 "filename": "/dev/nvme0n1", 00:13:42.432 "name": "xnvme_bdev" 00:13:42.432 }, 00:13:42.432 "method": "bdev_xnvme_create" 00:13:42.432 }, 00:13:42.432 { 00:13:42.432 "method": "bdev_wait_for_examine" 00:13:42.432 } 00:13:42.432 ] 00:13:42.432 } 00:13:42.432 ] 00:13:42.432 } 00:13:42.692 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:42.692 fio-3.35 00:13:42.692 Starting 1 thread 00:13:49.283 00:13:49.283 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70090: Thu Dec 5 17:00:22 2024 00:13:49.283 read: IOPS=36.4k, BW=142MiB/s (149MB/s)(712MiB/5001msec) 00:13:49.283 slat (usec): min=2, max=103, avg= 4.05, stdev= 2.18 00:13:49.283 clat (usec): min=270, max=7781, avg=1591.47, stdev=239.44 00:13:49.283 lat (usec): min=280, max=7784, avg=1595.52, stdev=239.98 00:13:49.283 clat percentiles (usec): 00:13:49.283 | 1.00th=[ 1188], 5.00th=[ 1303], 10.00th=[ 1352], 20.00th=[ 1418], 00:13:49.283 | 30.00th=[ 1467], 40.00th=[ 1500], 50.00th=[ 1549], 60.00th=[ 1598], 00:13:49.283 | 70.00th=[ 1647], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2024], 00:13:49.283 | 99.00th=[ 2343], 99.50th=[ 2507], 99.90th=[ 3064], 99.95th=[ 3294], 00:13:49.283 | 99.99th=[ 5014] 00:13:49.283 bw ( KiB/s): min=141824, max=151040, 
per=99.96%, avg=145716.78, stdev=2677.40, samples=9 00:13:49.283 iops : min=35456, max=37760, avg=36429.11, stdev=669.34, samples=9 00:13:49.283 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:13:49.283 lat (msec) : 2=94.26%, 4=5.69%, 10=0.02% 00:13:49.283 cpu : usr=29.68%, sys=68.88%, ctx=13, majf=0, minf=762 00:13:49.283 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=25.0%, 32=50.2%, >=64=1.6% 00:13:49.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:49.283 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:49.283 issued rwts: total=182259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:49.283 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:49.283 00:13:49.283 Run status group 0 (all jobs): 00:13:49.283 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=712MiB (747MB), run=5001-5001msec 00:13:49.283 ----------------------------------------------------- 00:13:49.283 Suppressions used: 00:13:49.283 count bytes template 00:13:49.283 1 11 /usr/src/fio/parse.c 00:13:49.283 1 8 libtcmalloc_minimal.so 00:13:49.283 1 904 libcrypto.so 00:13:49.283 ----------------------------------------------------- 00:13:49.283 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:49.283 17:00:23 nvme_xnvme.xnvme_fio_plugin 
-- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:49.284 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:49.284 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:49.284 17:00:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:49.284 { 00:13:49.284 "subsystems": [ 00:13:49.284 { 00:13:49.284 "subsystem": "bdev", 00:13:49.284 "config": [ 00:13:49.284 { 00:13:49.284 "params": { 00:13:49.284 "io_mechanism": "io_uring", 00:13:49.284 "conserve_cpu": false, 00:13:49.284 "filename": "/dev/nvme0n1", 00:13:49.284 "name": "xnvme_bdev" 00:13:49.284 }, 00:13:49.284 "method": "bdev_xnvme_create" 00:13:49.284 }, 00:13:49.284 { 00:13:49.284 "method": "bdev_wait_for_examine" 00:13:49.284 } 00:13:49.284 ] 00:13:49.284 } 00:13:49.284 ] 00:13:49.284 } 00:13:49.545 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:49.545 fio-3.35 00:13:49.545 Starting 1 thread 00:13:56.146 00:13:56.146 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70182: Thu Dec 5 17:00:29 2024 00:13:56.146 write: IOPS=30.1k, BW=117MiB/s (123MB/s)(588MiB/5009msec); 0 zone resets 00:13:56.146 slat (usec): min=2, max=283, avg= 4.21, stdev= 2.89 00:13:56.146 clat (usec): min=57, max=19624, avg=1983.82, stdev=2213.72 00:13:56.146 lat (usec): min=60, max=19628, avg=1988.04, stdev=2213.76 00:13:56.146 clat percentiles (usec): 00:13:56.146 | 1.00th=[ 371], 5.00th=[ 676], 10.00th=[ 955], 20.00th=[ 1270], 00:13:56.146 | 30.00th=[ 1369], 40.00th=[ 1434], 50.00th=[ 1500], 60.00th=[ 1582], 00:13:56.146 | 70.00th=[ 1663], 80.00th=[ 1795], 90.00th=[ 2040], 95.00th=[ 8291], 00:13:56.146 | 99.00th=[12387], 99.50th=[13173], 99.90th=[15008], 99.95th=[15664], 00:13:56.146 | 99.99th=[16712] 00:13:56.146 bw ( KiB/s): min=66376, max=165944, per=100.00%, avg=120393.60, stdev=39485.00, samples=10 00:13:56.146 iops : min=16594, max=41486, avg=30098.40, stdev=9871.25, samples=10 00:13:56.146 lat (usec) : 100=0.01%, 250=0.35%, 500=1.63%, 750=4.80%, 1000=3.60% 00:13:56.146 lat (msec) : 2=78.81%, 4=4.99%, 10=2.20%, 20=3.62% 00:13:56.146 cpu : usr=30.29%, sys=68.03%, ctx=23, majf=0, minf=763 00:13:56.146 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.4%, 32=57.1%, >=64=3.8% 00:13:56.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:56.146 complete : 0=0.0%, 4=97.6%, 8=0.5%, 16=0.4%, 32=0.3%, 64=1.3%, >=64=0.0% 00:13:56.146 issued rwts: total=0,150540,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:56.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:56.146 00:13:56.146 Run status group 0 (all jobs): 00:13:56.146 WRITE: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=588MiB (617MB), run=5009-5009msec 00:13:56.146 ----------------------------------------------------- 00:13:56.146 Suppressions used: 00:13:56.146 count bytes template 00:13:56.146 1 11 /usr/src/fio/parse.c 00:13:56.146 1 8 libtcmalloc_minimal.so 00:13:56.146 1 904 libcrypto.so 00:13:56.146 ----------------------------------------------------- 00:13:56.146 00:13:56.146 00:13:56.146 real 0m13.829s 00:13:56.146 user 0m5.898s 00:13:56.146 sys 0m7.447s 00:13:56.146 17:00:30 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.146 ************************************ 00:13:56.146 END TEST xnvme_fio_plugin 00:13:56.146 ************************************ 00:13:56.146 17:00:30 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:56.146 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:56.146 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:56.146 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:56.146 17:00:30 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:56.146 17:00:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:56.146 17:00:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.146 17:00:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:56.408 ************************************ 00:13:56.408 START TEST xnvme_rpc 00:13:56.408 ************************************ 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70269 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70269 00:13:56.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70269 ']' 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.408 17:00:30 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:56.408 [2024-12-05 17:00:30.610288] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
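The third pass begins here: io_uring again but with conserve_cpu flipped to true, which on the RPC surface amounts to nothing more than the trailing -c (cc["true"]=-c versus the empty cc["false"]). A sketch of the two create calls side by side, assuming the same stock rpc.py client as earlier:

  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring      # conserve_cpu stays false
  ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # conserve_cpu=true

  # The test then checks that the flag round-trips through the config dump:
  ./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'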
00:13:56.408 [2024-12-05 17:00:30.610444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70269 ] 00:13:56.669 [2024-12-05 17:00:30.775352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.669 [2024-12-05 17:00:30.897169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.242 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.242 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:57.242 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:57.243 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.243 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 xnvme_bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:57.504 17:00:31 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70269 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70269 ']' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70269 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70269 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.504 killing process with pid 70269 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70269' 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70269 00:13:57.504 17:00:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70269 00:13:59.421 00:13:59.421 real 0m2.936s 00:13:59.421 user 0m2.926s 00:13:59.421 sys 0m0.484s 00:13:59.421 17:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.421 ************************************ 00:13:59.421 END TEST xnvme_rpc 00:13:59.421 ************************************ 00:13:59.421 17:00:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.421 17:00:33 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:59.421 17:00:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:59.421 17:00:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.421 17:00:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.421 ************************************ 00:13:59.421 START TEST xnvme_bdevperf 00:13:59.421 ************************************ 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:59.421 17:00:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:59.421 { 00:13:59.421 "subsystems": [ 00:13:59.421 { 00:13:59.421 "subsystem": "bdev", 00:13:59.421 "config": [ 00:13:59.421 { 00:13:59.421 "params": { 00:13:59.421 "io_mechanism": "io_uring", 00:13:59.421 "conserve_cpu": true, 00:13:59.421 "filename": "/dev/nvme0n1", 00:13:59.421 "name": "xnvme_bdev" 00:13:59.421 }, 00:13:59.421 "method": "bdev_xnvme_create" 00:13:59.421 }, 00:13:59.421 { 00:13:59.421 "method": "bdev_wait_for_examine" 00:13:59.421 } 00:13:59.421 ] 00:13:59.421 } 00:13:59.421 ] 00:13:59.421 } 00:13:59.421 [2024-12-05 17:00:33.605763] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:13:59.421 [2024-12-05 17:00:33.606353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70343 ] 00:13:59.421 [2024-12-05 17:00:33.771236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.682 [2024-12-05 17:00:33.890791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.943 Running I/O for 5 seconds... 00:14:01.828 37031.00 IOPS, 144.65 MiB/s [2024-12-05T17:00:37.578Z] 35376.50 IOPS, 138.19 MiB/s [2024-12-05T17:00:38.519Z] 34952.00 IOPS, 136.53 MiB/s [2024-12-05T17:00:39.459Z] 35017.25 IOPS, 136.79 MiB/s 00:14:05.092 Latency(us) 00:14:05.092 [2024-12-05T17:00:39.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.092 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:05.092 xnvme_bdev : 5.00 34871.69 136.22 0.00 0.00 1830.44 155.96 12351.02 00:14:05.092 [2024-12-05T17:00:39.459Z] =================================================================================================================== 00:14:05.092 [2024-12-05T17:00:39.459Z] Total : 34871.69 136.22 0.00 0.00 1830.44 155.96 12351.02 00:14:05.663 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:05.663 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:05.663 17:00:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:05.663 17:00:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:05.663 17:00:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:05.663 { 00:14:05.663 "subsystems": [ 00:14:05.663 { 00:14:05.663 "subsystem": "bdev", 00:14:05.663 "config": [ 00:14:05.663 { 00:14:05.663 "params": { 00:14:05.663 "io_mechanism": "io_uring", 00:14:05.663 "conserve_cpu": true, 00:14:05.663 "filename": "/dev/nvme0n1", 00:14:05.663 "name": "xnvme_bdev" 00:14:05.663 }, 00:14:05.663 "method": "bdev_xnvme_create" 00:14:05.663 }, 00:14:05.663 { 00:14:05.663 "method": "bdev_wait_for_examine" 00:14:05.663 } 00:14:05.663 ] 00:14:05.663 } 00:14:05.663 ] 00:14:05.663 } 00:14:05.924 [2024-12-05 17:00:40.060687] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
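For reference, each bdevperf pass in this test streams its bdev configuration to the process over /dev/fd/62; the same measurement can be reproduced outside the harness with a static JSON file. A minimal sketch, assuming the SPDK repo root as working directory (the /tmp path is illustrative; the JSON body and all flags are copied from the run above):

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": true,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# -q 64: queue depth, -w: workload, -t 5: run seconds, -o 4096: IO size, -T: bdev under test
./build/examples/bdevperf --json /tmp/xnvme_bdev.json -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096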
00:14:05.924 [2024-12-05 17:00:40.060833] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70418 ] 00:14:05.924 [2024-12-05 17:00:40.224783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.186 [2024-12-05 17:00:40.345071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.449 Running I/O for 5 seconds... 00:14:08.339 6704.00 IOPS, 26.19 MiB/s [2024-12-05T17:00:43.652Z] 6770.50 IOPS, 26.45 MiB/s [2024-12-05T17:00:45.053Z] 6667.67 IOPS, 26.05 MiB/s [2024-12-05T17:00:45.995Z] 6619.75 IOPS, 25.86 MiB/s [2024-12-05T17:00:45.995Z] 6612.60 IOPS, 25.83 MiB/s 00:14:11.628 Latency(us) 00:14:11.628 [2024-12-05T17:00:45.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:11.628 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:11.628 xnvme_bdev : 5.02 6606.49 25.81 0.00 0.00 9670.14 66.56 30852.33 00:14:11.628 [2024-12-05T17:00:45.995Z] =================================================================================================================== 00:14:11.628 [2024-12-05T17:00:45.995Z] Total : 6606.49 25.81 0.00 0.00 9670.14 66.56 30852.33 00:14:12.199 00:14:12.199 real 0m12.915s 00:14:12.199 user 0m8.944s 00:14:12.199 sys 0m3.062s 00:14:12.199 17:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:12.199 ************************************ 00:14:12.199 END TEST xnvme_bdevperf 00:14:12.199 ************************************ 00:14:12.199 17:00:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:12.199 17:00:46 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:12.199 17:00:46 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:12.199 17:00:46 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:12.199 17:00:46 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:12.199 ************************************ 00:14:12.199 START TEST xnvme_fio_plugin 00:14:12.199 ************************************ 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:12.199 17:00:46 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:12.199 { 00:14:12.199 "subsystems": [ 00:14:12.199 { 00:14:12.199 "subsystem": "bdev", 00:14:12.199 "config": [ 00:14:12.199 { 00:14:12.199 "params": { 00:14:12.199 "io_mechanism": "io_uring", 00:14:12.199 "conserve_cpu": true, 00:14:12.199 "filename": "/dev/nvme0n1", 00:14:12.199 "name": "xnvme_bdev" 00:14:12.199 }, 00:14:12.199 "method": "bdev_xnvme_create" 00:14:12.199 }, 00:14:12.199 { 00:14:12.199 "method": "bdev_wait_for_examine" 00:14:12.199 } 00:14:12.199 ] 00:14:12.199 } 00:14:12.199 ] 00:14:12.199 } 00:14:12.460 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:12.460 fio-3.35 00:14:12.460 Starting 1 thread 00:14:19.146 00:14:19.146 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70532: Thu Dec 5 17:00:52 2024 00:14:19.146 read: IOPS=36.1k, BW=141MiB/s (148MB/s)(705MiB/5001msec) 00:14:19.146 slat (nsec): min=2880, max=79135, avg=4067.52, stdev=2099.29 00:14:19.146 clat (usec): min=1140, max=3798, avg=1606.59, stdev=196.30 00:14:19.146 lat (usec): min=1143, max=3830, avg=1610.66, stdev=196.83 00:14:19.146 clat percentiles (usec): 00:14:19.146 | 1.00th=[ 1287], 5.00th=[ 1352], 10.00th=[ 1401], 20.00th=[ 1450], 00:14:19.146 | 30.00th=[ 1500], 40.00th=[ 1532], 50.00th=[ 1565], 60.00th=[ 1614], 00:14:19.146 | 70.00th=[ 1663], 80.00th=[ 1729], 90.00th=[ 1860], 95.00th=[ 1991], 00:14:19.146 | 99.00th=[ 2212], 99.50th=[ 2311], 99.90th=[ 2540], 99.95th=[ 2737], 00:14:19.146 | 99.99th=[ 3621] 00:14:19.146 bw ( KiB/s): min=142336, 
max=146944, per=100.00%, avg=144497.78, stdev=1420.23, samples=9 00:14:19.146 iops : min=35584, max=36736, avg=36124.44, stdev=355.06, samples=9 00:14:19.146 lat (msec) : 2=95.44%, 4=4.56% 00:14:19.146 cpu : usr=34.92%, sys=60.70%, ctx=10, majf=0, minf=762 00:14:19.146 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:19.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:19.146 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:14:19.146 issued rwts: total=180544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:19.146 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:19.146 00:14:19.146 Run status group 0 (all jobs): 00:14:19.146 READ: bw=141MiB/s (148MB/s), 141MiB/s-141MiB/s (148MB/s-148MB/s), io=705MiB (740MB), run=5001-5001msec 00:14:19.146 ----------------------------------------------------- 00:14:19.146 Suppressions used: 00:14:19.146 count bytes template 00:14:19.146 1 11 /usr/src/fio/parse.c 00:14:19.146 1 8 libtcmalloc_minimal.so 00:14:19.146 1 904 libcrypto.so 00:14:19.146 ----------------------------------------------------- 00:14:19.146 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:19.146 17:00:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:19.146 { 00:14:19.146 "subsystems": [ 00:14:19.146 { 00:14:19.146 "subsystem": "bdev", 00:14:19.146 "config": [ 00:14:19.146 { 00:14:19.146 "params": { 00:14:19.146 "io_mechanism": "io_uring", 00:14:19.146 "conserve_cpu": true, 00:14:19.146 "filename": "/dev/nvme0n1", 00:14:19.146 "name": "xnvme_bdev" 00:14:19.146 }, 00:14:19.146 "method": "bdev_xnvme_create" 00:14:19.146 }, 00:14:19.146 { 00:14:19.146 "method": "bdev_wait_for_examine" 00:14:19.146 } 00:14:19.146 ] 00:14:19.146 } 00:14:19.146 ] 00:14:19.146 } 00:14:19.407 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:19.407 fio-3.35 00:14:19.407 Starting 1 thread 00:14:25.989 00:14:25.989 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70629: Thu Dec 5 17:00:59 2024 00:14:25.989 write: IOPS=32.5k, BW=127MiB/s (133MB/s)(636MiB/5012msec); 0 zone resets 00:14:25.989 slat (usec): min=2, max=229, avg= 4.54, stdev= 2.71 00:14:25.989 clat (usec): min=91, max=21117, avg=1796.94, stdev=1552.85 00:14:25.989 lat (usec): min=95, max=21120, avg=1801.48, stdev=1552.96 00:14:25.989 clat percentiles (usec): 00:14:25.989 | 1.00th=[ 586], 5.00th=[ 1237], 10.00th=[ 1336], 20.00th=[ 1418], 00:14:25.989 | 30.00th=[ 1467], 40.00th=[ 1516], 50.00th=[ 1565], 60.00th=[ 1631], 00:14:25.989 | 70.00th=[ 1696], 80.00th=[ 1778], 90.00th=[ 1958], 95.00th=[ 2147], 00:14:25.989 | 99.00th=[11600], 99.50th=[13566], 99.90th=[16581], 99.95th=[17695], 00:14:25.989 | 99.99th=[19792] 00:14:25.989 bw ( KiB/s): min=60520, max=145344, per=100.00%, avg=130132.80, stdev=28398.99, samples=10 00:14:25.989 iops : min=15130, max=36336, avg=32533.20, stdev=7099.75, samples=10 00:14:25.989 lat (usec) : 100=0.01%, 250=0.07%, 500=0.57%, 750=1.75%, 1000=1.34% 00:14:25.989 lat (msec) : 2=87.93%, 4=6.11%, 10=0.61%, 20=1.61%, 50=0.01% 00:14:25.989 cpu : usr=45.18%, sys=49.29%, ctx=22, majf=0, minf=763 00:14:25.989 IO depths : 1=1.4%, 2=2.8%, 4=5.8%, 8=11.6%, 16=23.4%, 32=52.6%, >=64=2.3% 00:14:25.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.989 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0% 00:14:25.989 issued rwts: total=0,162729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.989 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:25.989 00:14:25.989 Run status group 0 (all jobs): 00:14:25.989 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=636MiB (667MB), run=5012-5012msec 00:14:25.989 ----------------------------------------------------- 00:14:25.989 Suppressions used: 00:14:25.989 count bytes template 00:14:25.989 1 11 /usr/src/fio/parse.c 00:14:25.989 1 8 libtcmalloc_minimal.so 00:14:25.989 1 904 libcrypto.so 00:14:25.989 ----------------------------------------------------- 00:14:25.989 00:14:25.989 00:14:25.989 real 0m13.791s 00:14:25.989 user 0m6.918s 00:14:25.989 sys 0m6.065s 00:14:25.989 ************************************ 
00:14:25.989 END TEST xnvme_fio_plugin 00:14:25.989 ************************************ 00:14:25.989 17:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.989 17:01:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:26.249 17:01:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:26.249 17:01:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.249 17:01:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.249 17:01:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.249 ************************************ 00:14:26.249 START TEST xnvme_rpc 00:14:26.249 ************************************ 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:26.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70710 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70710 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70710 ']' 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:26.249 17:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:26.249 [2024-12-05 17:01:00.470376] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
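The xnvme_rpc test starting here exercises bdev management RPCs against the freshly launched spdk_tgt. A sketch of the equivalent manual sequence via the stock scripts/rpc.py client (an assumption: the test's rpc_cmd helper sends these same calls over the target's RPC socket):

# create an xnvme bdev over io_uring_cmd, conserve_cpu left off
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# read a registered parameter back, using the same jq filter as the test
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'   # expect /dev/ng0n1
# tear the bdev down again
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev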
00:14:26.249 [2024-12-05 17:01:00.470527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:14:26.510 [2024-12-05 17:01:00.628346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.510 [2024-12-05 17:01:00.758361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.082 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:27.082 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:27.082 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:14:27.082 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.082 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.343 xnvme_bdev 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:27.343 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:27.344 
17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70710 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70710 ']' 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70710 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70710 00:14:27.344 killing process with pid 70710 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70710' 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70710 00:14:27.344 17:01:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70710 00:14:29.256 ************************************ 00:14:29.256 END TEST xnvme_rpc 00:14:29.256 ************************************ 00:14:29.256 00:14:29.256 real 0m2.898s 00:14:29.256 user 0m2.903s 00:14:29.256 sys 0m0.473s 00:14:29.256 17:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:29.256 17:01:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 17:01:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:29.256 17:01:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:29.256 17:01:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:29.256 17:01:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 ************************************ 00:14:29.256 START TEST xnvme_bdevperf 00:14:29.256 ************************************ 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:29.256 17:01:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:29.256 { 00:14:29.256 "subsystems": [ 00:14:29.256 { 00:14:29.256 "subsystem": "bdev", 00:14:29.256 "config": [ 00:14:29.256 { 00:14:29.256 "params": { 00:14:29.256 "io_mechanism": "io_uring_cmd", 00:14:29.256 "conserve_cpu": false, 00:14:29.256 "filename": "/dev/ng0n1", 00:14:29.256 "name": "xnvme_bdev" 00:14:29.256 }, 00:14:29.256 "method": "bdev_xnvme_create" 00:14:29.256 }, 00:14:29.256 { 00:14:29.256 "method": "bdev_wait_for_examine" 00:14:29.256 } 00:14:29.256 ] 00:14:29.256 } 00:14:29.256 ] 00:14:29.256 } 00:14:29.256 [2024-12-05 17:01:03.424823] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:29.256 [2024-12-05 17:01:03.424986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70784 ] 00:14:29.256 [2024-12-05 17:01:03.584690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.517 [2024-12-05 17:01:03.702648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.778 Running I/O for 5 seconds... 00:14:31.664 40960.00 IOPS, 160.00 MiB/s [2024-12-05T17:01:07.418Z] 37888.00 IOPS, 148.00 MiB/s [2024-12-05T17:01:08.361Z] 37439.67 IOPS, 146.25 MiB/s [2024-12-05T17:01:09.305Z] 38047.50 IOPS, 148.62 MiB/s [2024-12-05T17:01:09.305Z] 37849.20 IOPS, 147.85 MiB/s 00:14:34.938 Latency(us) 00:14:34.938 [2024-12-05T17:01:09.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:34.938 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:34.938 xnvme_bdev : 5.00 37825.68 147.76 0.00 0.00 1687.73 863.31 5394.12 00:14:34.938 [2024-12-05T17:01:09.305Z] =================================================================================================================== 00:14:34.938 [2024-12-05T17:01:09.305Z] Total : 37825.68 147.76 0.00 0.00 1687.73 863.31 5394.12 00:14:35.511 17:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:35.511 17:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:35.511 17:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:35.511 17:01:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:35.511 17:01:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:35.511 { 00:14:35.511 "subsystems": [ 00:14:35.511 { 00:14:35.511 "subsystem": "bdev", 00:14:35.511 "config": [ 00:14:35.511 { 00:14:35.511 "params": { 00:14:35.511 "io_mechanism": "io_uring_cmd", 00:14:35.511 "conserve_cpu": false, 00:14:35.511 "filename": "/dev/ng0n1", 00:14:35.511 "name": "xnvme_bdev" 00:14:35.511 }, 00:14:35.511 "method": "bdev_xnvme_create" 00:14:35.511 }, 00:14:35.511 { 00:14:35.511 "method": "bdev_wait_for_examine" 00:14:35.511 } 00:14:35.511 ] 00:14:35.511 } 00:14:35.511 ] 00:14:35.511 } 00:14:35.511 [2024-12-05 17:01:09.855437] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
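The four bdevperf passes in this test (randread, randwrite, unmap, write_zeroes) are selected by the bash nameref visible in the xtrace above: local -n io_pattern_ref=io_uring_cmd binds the loop to an array named after the current I/O mechanism. A standalone sketch of that pattern — the array contents are inferred from the workloads actually run here, not copied from the suite:

# per-mechanism workload list (inferred from this log)
io_uring_cmd=(randread randwrite unmap write_zeroes)
run_patterns() {
  local -n io_pattern_ref=$1   # nameref: $1 names the array to iterate
  local io_pattern
  for io_pattern in "${io_pattern_ref[@]}"; do
    echo "bdevperf pass: -w $io_pattern"
  done
}
run_patterns io_uring_cmd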
00:14:35.511 [2024-12-05 17:01:09.855579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70858 ] 00:14:35.772 [2024-12-05 17:01:10.020242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.033 [2024-12-05 17:01:10.175313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.293 Running I/O for 5 seconds... 00:14:38.179 37663.00 IOPS, 147.12 MiB/s [2024-12-05T17:01:13.488Z] 37697.50 IOPS, 147.26 MiB/s [2024-12-05T17:01:14.872Z] 37862.33 IOPS, 147.90 MiB/s [2024-12-05T17:01:15.812Z] 37812.25 IOPS, 147.70 MiB/s [2024-12-05T17:01:15.812Z] 37727.80 IOPS, 147.37 MiB/s 00:14:41.445 Latency(us) 00:14:41.445 [2024-12-05T17:01:15.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.445 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:41.445 xnvme_bdev : 5.01 37698.71 147.26 0.00 0.00 1693.10 316.65 5520.15 00:14:41.445 [2024-12-05T17:01:15.812Z] =================================================================================================================== 00:14:41.445 [2024-12-05T17:01:15.812Z] Total : 37698.71 147.26 0.00 0.00 1693.10 316.65 5520.15 00:14:42.017 17:01:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.017 17:01:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:42.017 17:01:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:42.017 17:01:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.017 17:01:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.017 { 00:14:42.017 "subsystems": [ 00:14:42.017 { 00:14:42.017 "subsystem": "bdev", 00:14:42.017 "config": [ 00:14:42.017 { 00:14:42.017 "params": { 00:14:42.017 "io_mechanism": "io_uring_cmd", 00:14:42.017 "conserve_cpu": false, 00:14:42.017 "filename": "/dev/ng0n1", 00:14:42.017 "name": "xnvme_bdev" 00:14:42.017 }, 00:14:42.017 "method": "bdev_xnvme_create" 00:14:42.017 }, 00:14:42.017 { 00:14:42.017 "method": "bdev_wait_for_examine" 00:14:42.017 } 00:14:42.017 ] 00:14:42.017 } 00:14:42.017 ] 00:14:42.017 } 00:14:42.017 [2024-12-05 17:01:16.328432] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:42.017 [2024-12-05 17:01:16.328791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70940 ] 00:14:42.278 [2024-12-05 17:01:16.494127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.278 [2024-12-05 17:01:16.611329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.539 Running I/O for 5 seconds... 
00:14:44.866 79872.00 IOPS, 312.00 MiB/s [2024-12-05T17:01:20.175Z] 79712.00 IOPS, 311.38 MiB/s [2024-12-05T17:01:21.118Z] 79317.33 IOPS, 309.83 MiB/s [2024-12-05T17:01:22.060Z] 79936.00 IOPS, 312.25 MiB/s 00:14:47.693 Latency(us) 00:14:47.693 [2024-12-05T17:01:22.060Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:47.693 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:47.693 xnvme_bdev : 5.00 80143.37 313.06 0.00 0.00 795.19 513.58 2457.60 00:14:47.693 [2024-12-05T17:01:22.060Z] =================================================================================================================== 00:14:47.693 [2024-12-05T17:01:22.060Z] Total : 80143.37 313.06 0.00 0.00 795.19 513.58 2457.60 00:14:48.263 17:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:48.263 17:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:48.263 17:01:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:48.263 17:01:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:48.263 17:01:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:48.263 { 00:14:48.263 "subsystems": [ 00:14:48.263 { 00:14:48.263 "subsystem": "bdev", 00:14:48.263 "config": [ 00:14:48.263 { 00:14:48.263 "params": { 00:14:48.263 "io_mechanism": "io_uring_cmd", 00:14:48.263 "conserve_cpu": false, 00:14:48.263 "filename": "/dev/ng0n1", 00:14:48.263 "name": "xnvme_bdev" 00:14:48.263 }, 00:14:48.263 "method": "bdev_xnvme_create" 00:14:48.263 }, 00:14:48.263 { 00:14:48.263 "method": "bdev_wait_for_examine" 00:14:48.263 } 00:14:48.263 ] 00:14:48.263 } 00:14:48.263 ] 00:14:48.263 } 00:14:48.263 [2024-12-05 17:01:22.627631] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:14:48.263 [2024-12-05 17:01:22.627745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71009 ] 00:14:48.522 [2024-12-05 17:01:22.783041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.522 [2024-12-05 17:01:22.860639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:48.782 Running I/O for 5 seconds... 
00:14:51.109 266.00 IOPS, 1.04 MiB/s [2024-12-05T17:01:26.415Z] 213.00 IOPS, 0.83 MiB/s [2024-12-05T17:01:27.430Z] 217.33 IOPS, 0.85 MiB/s [2024-12-05T17:01:28.372Z] 239.50 IOPS, 0.94 MiB/s [2024-12-05T17:01:28.372Z] 256.40 IOPS, 1.00 MiB/s 00:14:54.005 Latency(us) 00:14:54.005 [2024-12-05T17:01:28.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.005 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:54.005 xnvme_bdev : 5.23 257.50 1.01 0.00 0.00 242851.27 437.96 922746.88 00:14:54.005 [2024-12-05T17:01:28.372Z] =================================================================================================================== 00:14:54.005 [2024-12-05T17:01:28.372Z] Total : 257.50 1.01 0.00 0.00 242851.27 437.96 922746.88 00:14:54.576 00:14:54.576 real 0m25.477s 00:14:54.576 user 0m14.060s 00:14:54.576 sys 0m10.920s 00:14:54.576 17:01:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.576 ************************************ 00:14:54.576 END TEST xnvme_bdevperf 00:14:54.576 ************************************ 00:14:54.576 17:01:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:54.576 17:01:28 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:54.576 17:01:28 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:54.576 17:01:28 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.576 17:01:28 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.576 ************************************ 00:14:54.576 START TEST xnvme_fio_plugin 00:14:54.576 ************************************ 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 
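The xnvme_fio_plugin test being set up here points an external fio binary at the SPDK bdev layer: the spdk_bdev ioengine is injected via LD_PRELOAD (alongside libasan on this sanitized build) and the bdev JSON — here describing an io_uring_cmd bdev on /dev/ng0n1 — is passed on /dev/fd/62. Stripped of the sanitizer preload, the invocation amounts to this sketch (the static JSON path is illustrative, built as in the earlier sketch but with this test's params; all fio flags are copied from the run):

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev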
00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:54.576 17:01:28 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:54.576 { 00:14:54.576 "subsystems": [ 00:14:54.576 { 00:14:54.576 "subsystem": "bdev", 00:14:54.576 "config": [ 00:14:54.576 { 00:14:54.576 "params": { 00:14:54.576 "io_mechanism": "io_uring_cmd", 00:14:54.576 "conserve_cpu": false, 00:14:54.576 "filename": "/dev/ng0n1", 00:14:54.576 "name": "xnvme_bdev" 00:14:54.576 }, 00:14:54.576 "method": "bdev_xnvme_create" 00:14:54.576 }, 00:14:54.576 { 00:14:54.576 "method": "bdev_wait_for_examine" 00:14:54.576 } 00:14:54.576 ] 00:14:54.576 } 00:14:54.576 ] 00:14:54.576 } 00:14:54.837 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:54.837 fio-3.35 00:14:54.837 Starting 1 thread 00:15:01.419 00:15:01.419 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71127: Thu Dec 5 17:01:34 2024 00:15:01.419 read: IOPS=39.0k, BW=152MiB/s (160MB/s)(762MiB/5003msec) 00:15:01.419 slat (nsec): min=2881, max=90969, avg=3802.63, stdev=1847.35 00:15:01.419 clat (usec): min=266, max=9050, avg=1490.85, stdev=284.42 00:15:01.419 lat (usec): min=277, max=9053, avg=1494.65, stdev=284.79 00:15:01.419 clat percentiles (usec): 00:15:01.419 | 1.00th=[ 971], 5.00th=[ 1074], 10.00th=[ 1156], 20.00th=[ 1270], 00:15:01.419 | 30.00th=[ 1369], 40.00th=[ 1434], 50.00th=[ 1483], 60.00th=[ 1532], 00:15:01.419 | 70.00th=[ 1598], 80.00th=[ 1663], 90.00th=[ 1811], 95.00th=[ 1942], 00:15:01.419 | 99.00th=[ 2278], 99.50th=[ 2540], 99.90th=[ 3523], 99.95th=[ 4146], 00:15:01.419 | 99.99th=[ 5669] 00:15:01.419 bw ( KiB/s): min=143360, max=184832, per=100.00%, avg=157866.67, stdev=13497.78, samples=9 00:15:01.419 iops : min=35840, max=46208, avg=39466.67, stdev=3374.45, samples=9 00:15:01.419 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.74% 00:15:01.419 lat (msec) : 2=94.62%, 4=3.58%, 10=0.05% 00:15:01.419 cpu : usr=33.85%, sys=64.87%, ctx=16, majf=0, minf=762 00:15:01.419 IO depths : 1=1.5%, 2=3.0%, 4=6.1%, 8=12.4%, 16=25.0%, 32=50.4%, >=64=1.6% 00:15:01.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:01.419 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 
32=0.1%, 64=1.5%, >=64=0.0% 00:15:01.419 issued rwts: total=194960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:01.419 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:01.419 00:15:01.419 Run status group 0 (all jobs): 00:15:01.419 READ: bw=152MiB/s (160MB/s), 152MiB/s-152MiB/s (160MB/s-160MB/s), io=762MiB (799MB), run=5003-5003msec 00:15:01.419 ----------------------------------------------------- 00:15:01.419 Suppressions used: 00:15:01.419 count bytes template 00:15:01.419 1 11 /usr/src/fio/parse.c 00:15:01.419 1 8 libtcmalloc_minimal.so 00:15:01.419 1 904 libcrypto.so 00:15:01.419 ----------------------------------------------------- 00:15:01.419 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:01.420 17:01:35 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:01.420 { 00:15:01.420 "subsystems": [ 00:15:01.420 { 00:15:01.420 "subsystem": "bdev", 00:15:01.420 "config": [ 00:15:01.420 { 00:15:01.420 "params": { 00:15:01.420 "io_mechanism": "io_uring_cmd", 00:15:01.420 "conserve_cpu": false, 00:15:01.420 "filename": "/dev/ng0n1", 00:15:01.420 "name": "xnvme_bdev" 00:15:01.420 }, 00:15:01.420 "method": "bdev_xnvme_create" 00:15:01.420 }, 00:15:01.420 { 00:15:01.420 "method": "bdev_wait_for_examine" 00:15:01.420 } 00:15:01.420 ] 00:15:01.420 } 00:15:01.420 ] 00:15:01.420 } 00:15:01.680 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:01.680 fio-3.35 00:15:01.680 Starting 1 thread 00:15:08.275 00:15:08.275 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71212: Thu Dec 5 17:01:41 2024 00:15:08.275 write: IOPS=34.7k, BW=135MiB/s (142MB/s)(678MiB/5008msec); 0 zone resets 00:15:08.275 slat (usec): min=2, max=222, avg= 4.02, stdev= 2.40 00:15:08.275 clat (usec): min=62, max=18832, avg=1703.48, stdev=1486.69 00:15:08.275 lat (usec): min=65, max=18835, avg=1707.50, stdev=1486.76 00:15:08.275 clat percentiles (usec): 00:15:08.275 | 1.00th=[ 396], 5.00th=[ 791], 10.00th=[ 1029], 20.00th=[ 1188], 00:15:08.275 | 30.00th=[ 1287], 40.00th=[ 1385], 50.00th=[ 1467], 60.00th=[ 1549], 00:15:08.275 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1926], 95.00th=[ 2442], 00:15:08.275 | 99.00th=[10421], 99.50th=[11731], 99.90th=[13698], 99.95th=[14615], 00:15:08.275 | 99.99th=[17171] 00:15:08.275 bw ( KiB/s): min=73640, max=175616, per=100.00%, avg=138859.60, stdev=26000.69, samples=10 00:15:08.275 iops : min=18410, max=43904, avg=34714.90, stdev=6500.17, samples=10 00:15:08.275 lat (usec) : 100=0.01%, 250=0.29%, 500=1.33%, 750=2.73%, 1000=4.55% 00:15:08.275 lat (msec) : 2=82.85%, 4=4.45%, 10=2.63%, 20=1.16% 00:15:08.275 cpu : usr=35.75%, sys=62.51%, ctx=32, majf=0, minf=763 00:15:08.275 IO depths : 1=1.2%, 2=2.4%, 4=4.9%, 8=10.2%, 16=22.0%, 32=56.6%, >=64=2.8% 00:15:08.275 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.275 complete : 0=0.0%, 4=97.9%, 8=0.2%, 16=0.2%, 32=0.2%, 64=1.4%, >=64=0.0% 00:15:08.275 issued rwts: total=0,173663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.275 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:08.275 00:15:08.275 Run status group 0 (all jobs): 00:15:08.275 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=678MiB (711MB), run=5008-5008msec 00:15:08.275 ----------------------------------------------------- 00:15:08.275 Suppressions used: 00:15:08.275 count bytes template 00:15:08.275 1 11 /usr/src/fio/parse.c 00:15:08.275 1 8 libtcmalloc_minimal.so 00:15:08.275 1 904 libcrypto.so 00:15:08.275 ----------------------------------------------------- 00:15:08.275 00:15:08.275 00:15:08.275 real 0m13.630s 00:15:08.275 user 0m6.214s 00:15:08.275 sys 0m6.940s 00:15:08.275 17:01:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.275 17:01:42 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:08.275 ************************************ 00:15:08.275 END TEST xnvme_fio_plugin 00:15:08.275 ************************************ 00:15:08.275 17:01:42 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:08.275 17:01:42 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:08.275 17:01:42 
nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:08.275 17:01:42 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:08.275 17:01:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.275 17:01:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.275 17:01:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.275 ************************************ 00:15:08.275 START TEST xnvme_rpc 00:15:08.275 ************************************ 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71297 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71297 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71297 ']' 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.275 17:01:42 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.536 [2024-12-05 17:01:42.676536] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
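This final xnvme_rpc pass repeats the create/inspect/delete cycle with conserve_cpu enabled, which per the cc mapping set up above is just the extra -c argument on create. The differing steps, sketched again with scripts/rpc.py under the same assumption as before:

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect true
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev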
00:15:08.536 [2024-12-05 17:01:42.676701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71297 ] 00:15:08.536 [2024-12-05 17:01:42.835618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.797 [2024-12-05 17:01:42.957054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.369 xnvme_bdev 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.369 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71297 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71297 ']' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71297 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71297 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.631 killing process with pid 71297 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71297' 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71297 00:15:09.631 17:01:43 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71297 00:15:11.544 00:15:11.544 real 0m2.870s 00:15:11.544 user 0m2.863s 00:15:11.544 sys 0m0.491s 00:15:11.544 17:01:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.544 ************************************ 00:15:11.544 END TEST xnvme_rpc 00:15:11.544 ************************************ 00:15:11.544 17:01:45 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.544 17:01:45 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:11.544 17:01:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:11.544 17:01:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.544 17:01:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:11.544 ************************************ 00:15:11.544 START TEST xnvme_bdevperf 00:15:11.544 ************************************ 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:11.544 17:01:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:11.544 { 00:15:11.544 "subsystems": [ 00:15:11.544 { 00:15:11.544 "subsystem": "bdev", 00:15:11.544 "config": [ 00:15:11.544 { 00:15:11.544 "params": { 00:15:11.544 "io_mechanism": "io_uring_cmd", 00:15:11.544 "conserve_cpu": true, 00:15:11.544 "filename": "/dev/ng0n1", 00:15:11.544 "name": "xnvme_bdev" 00:15:11.544 }, 00:15:11.544 "method": "bdev_xnvme_create" 00:15:11.544 }, 00:15:11.544 { 00:15:11.544 "method": "bdev_wait_for_examine" 00:15:11.544 } 00:15:11.544 ] 00:15:11.544 } 00:15:11.544 ] 00:15:11.544 } 00:15:11.544 [2024-12-05 17:01:45.595586] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:11.544 [2024-12-05 17:01:45.595733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71367 ] 00:15:11.544 [2024-12-05 17:01:45.759484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.544 [2024-12-05 17:01:45.880111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.805 Running I/O for 5 seconds... 00:15:14.134 35179.00 IOPS, 137.42 MiB/s [2024-12-05T17:01:49.456Z] 35460.50 IOPS, 138.52 MiB/s [2024-12-05T17:01:50.400Z] 35977.33 IOPS, 140.54 MiB/s [2024-12-05T17:01:51.340Z] 37846.00 IOPS, 147.84 MiB/s 00:15:16.973 Latency(us) 00:15:16.973 [2024-12-05T17:01:51.340Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.973 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:16.973 xnvme_bdev : 5.00 39022.88 152.43 0.00 0.00 1636.17 683.72 9880.81 00:15:16.973 [2024-12-05T17:01:51.340Z] =================================================================================================================== 00:15:16.973 [2024-12-05T17:01:51.340Z] Total : 39022.88 152.43 0.00 0.00 1636.17 683.72 9880.81 00:15:17.912 17:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:17.912 17:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:17.912 17:01:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:17.912 17:01:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:17.912 17:01:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:17.912 { 00:15:17.912 "subsystems": [ 00:15:17.912 { 00:15:17.912 "subsystem": "bdev", 00:15:17.912 "config": [ 00:15:17.912 { 00:15:17.912 "params": { 00:15:17.912 "io_mechanism": "io_uring_cmd", 00:15:17.912 "conserve_cpu": true, 00:15:17.912 "filename": "/dev/ng0n1", 00:15:17.912 "name": "xnvme_bdev" 00:15:17.912 }, 00:15:17.912 "method": "bdev_xnvme_create" 00:15:17.912 }, 00:15:17.912 { 00:15:17.912 "method": "bdev_wait_for_examine" 00:15:17.912 } 00:15:17.912 ] 00:15:17.912 } 00:15:17.912 ] 00:15:17.912 } 00:15:17.912 [2024-12-05 17:01:52.024540] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
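For reference, the xnvme_rpc sequence that completed above can be replayed by hand against a running spdk_tgt. A condensed sketch, using scripts/rpc.py as a stand-in for the harness's rpc_cmd wrapper (an assumption; the device path, bdev name, and jq filter are verbatim from the trace):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  # create an io_uring_cmd bdev over the /dev/ng0n1 char device, conserve_cpu enabled (-c)
  "$SPDK_DIR/scripts/rpc.py" bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  # read the config back and pick out one param, as the test's rpc_xnvme helper does
  "$SPDK_DIR/scripts/rpc.py" framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # prints: true
  # tear the bdev down again
  "$SPDK_DIR/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev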
00:15:17.912 [2024-12-05 17:01:52.024706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71441 ] 00:15:17.912 [2024-12-05 17:01:52.190805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:18.171 [2024-12-05 17:01:52.308787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.432 Running I/O for 5 seconds... 00:15:20.314 39078.00 IOPS, 152.65 MiB/s [2024-12-05T17:01:55.637Z] 39330.50 IOPS, 153.63 MiB/s [2024-12-05T17:01:57.021Z] 38979.33 IOPS, 152.26 MiB/s [2024-12-05T17:01:57.962Z] 39688.50 IOPS, 155.03 MiB/s [2024-12-05T17:01:57.962Z] 40129.80 IOPS, 156.76 MiB/s 00:15:23.595 Latency(us) 00:15:23.595 [2024-12-05T17:01:57.962Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.595 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:23.595 xnvme_bdev : 5.01 40091.67 156.61 0.00 0.00 1591.78 611.25 6805.66 00:15:23.595 [2024-12-05T17:01:57.962Z] =================================================================================================================== 00:15:23.595 [2024-12-05T17:01:57.962Z] Total : 40091.67 156.61 0.00 0.00 1591.78 611.25 6805.66 00:15:24.167 17:01:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.167 17:01:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:24.167 17:01:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:24.167 17:01:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:24.167 17:01:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.167 { 00:15:24.167 "subsystems": [ 00:15:24.167 { 00:15:24.167 "subsystem": "bdev", 00:15:24.167 "config": [ 00:15:24.167 { 00:15:24.167 "params": { 00:15:24.167 "io_mechanism": "io_uring_cmd", 00:15:24.167 "conserve_cpu": true, 00:15:24.167 "filename": "/dev/ng0n1", 00:15:24.167 "name": "xnvme_bdev" 00:15:24.167 }, 00:15:24.167 "method": "bdev_xnvme_create" 00:15:24.167 }, 00:15:24.167 { 00:15:24.167 "method": "bdev_wait_for_examine" 00:15:24.167 } 00:15:24.167 ] 00:15:24.167 } 00:15:24.167 ] 00:15:24.167 } 00:15:24.167 [2024-12-05 17:01:58.488281] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:24.167 [2024-12-05 17:01:58.488429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71521 ] 00:15:24.428 [2024-12-05 17:01:58.654628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.428 [2024-12-05 17:01:58.771708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.999 Running I/O for 5 seconds... 
00:15:26.886 80064.00 IOPS, 312.75 MiB/s [2024-12-05T17:02:02.191Z] 79552.00 IOPS, 310.75 MiB/s [2024-12-05T17:02:03.129Z] 79829.33 IOPS, 311.83 MiB/s [2024-12-05T17:02:04.071Z] 82880.00 IOPS, 323.75 MiB/s [2024-12-05T17:02:04.071Z] 85644.80 IOPS, 334.55 MiB/s 00:15:29.704 Latency(us) 00:15:29.704 [2024-12-05T17:02:04.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:29.704 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:29.704 xnvme_bdev : 5.00 85609.18 334.41 0.00 0.00 744.21 403.30 4234.63 00:15:29.704 [2024-12-05T17:02:04.071Z] =================================================================================================================== 00:15:29.704 [2024-12-05T17:02:04.071Z] Total : 85609.18 334.41 0.00 0.00 744.21 403.30 4234.63 00:15:30.309 17:02:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:30.309 17:02:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:30.309 17:02:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:30.309 17:02:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:30.309 17:02:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:30.605 { 00:15:30.605 "subsystems": [ 00:15:30.605 { 00:15:30.605 "subsystem": "bdev", 00:15:30.605 "config": [ 00:15:30.605 { 00:15:30.605 "params": { 00:15:30.605 "io_mechanism": "io_uring_cmd", 00:15:30.605 "conserve_cpu": true, 00:15:30.605 "filename": "/dev/ng0n1", 00:15:30.605 "name": "xnvme_bdev" 00:15:30.605 }, 00:15:30.605 "method": "bdev_xnvme_create" 00:15:30.605 }, 00:15:30.605 { 00:15:30.606 "method": "bdev_wait_for_examine" 00:15:30.606 } 00:15:30.606 ] 00:15:30.606 } 00:15:30.606 ] 00:15:30.606 } 00:15:30.606 [2024-12-05 17:02:04.694574] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:30.606 [2024-12-05 17:02:04.694703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71590 ] 00:15:30.606 [2024-12-05 17:02:04.850571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.606 [2024-12-05 17:02:04.925110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.869 Running I/O for 5 seconds... 
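The four bdevperf passes in this test (randread, randwrite, unmap, write_zeroes) come from a single loop in xnvme.sh. A rough sketch of its shape as visible in the xtrace above; the array literal is inferred from the -w values actually logged, and a temp file replaces the /dev/fd/62 descriptor that gen_conf feeds in the real harness:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  io_uring_cmd=(randread randwrite unmap write_zeroes)   # inferred from the logged runs

  xnvme_bdevperf() {
    local io_pattern conf
    local -n io_pattern_ref=io_uring_cmd   # bash nameref, as at xnvme.sh@13
    conf=$(mktemp)
    printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd","conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}' > "$conf"
    for io_pattern in "${io_pattern_ref[@]}"; do
      "$SPDK_DIR/build/examples/bdevperf" --json "$conf" \
        -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
    done
  }
  xnvme_bdevperf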
00:15:33.194 39519.00 IOPS, 154.37 MiB/s [2024-12-05T17:02:08.132Z] 28886.00 IOPS, 112.84 MiB/s [2024-12-05T17:02:09.519Z] 25337.67 IOPS, 98.98 MiB/s [2024-12-05T17:02:10.462Z] 23578.00 IOPS, 92.10 MiB/s [2024-12-05T17:02:10.462Z] 22346.60 IOPS, 87.29 MiB/s 00:15:36.095 Latency(us) 00:15:36.095 [2024-12-05T17:02:10.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.095 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:36.095 xnvme_bdev : 5.01 22334.93 87.25 0.00 0.00 2859.81 53.96 28835.84 00:15:36.095 [2024-12-05T17:02:10.462Z] =================================================================================================================== 00:15:36.095 [2024-12-05T17:02:10.462Z] Total : 22334.93 87.25 0.00 0.00 2859.81 53.96 28835.84 00:15:36.667 00:15:36.667 real 0m25.395s 00:15:36.667 user 0m16.750s 00:15:36.667 sys 0m6.839s 00:15:36.667 17:02:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.667 ************************************ 00:15:36.667 END TEST xnvme_bdevperf 00:15:36.667 ************************************ 00:15:36.667 17:02:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 17:02:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:36.667 17:02:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:36.667 17:02:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.667 17:02:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 ************************************ 00:15:36.667 START TEST xnvme_fio_plugin 00:15:36.667 ************************************ 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:36.667 17:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:36.667 17:02:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:36.667 { 00:15:36.667 "subsystems": [ 00:15:36.667 { 00:15:36.667 "subsystem": "bdev", 00:15:36.667 "config": [ 00:15:36.667 { 00:15:36.667 "params": { 00:15:36.667 "io_mechanism": "io_uring_cmd", 00:15:36.667 "conserve_cpu": true, 00:15:36.667 "filename": "/dev/ng0n1", 00:15:36.667 "name": "xnvme_bdev" 00:15:36.667 }, 00:15:36.667 "method": "bdev_xnvme_create" 00:15:36.667 }, 00:15:36.667 { 00:15:36.667 "method": "bdev_wait_for_examine" 00:15:36.667 } 00:15:36.667 ] 00:15:36.667 } 00:15:36.667 ] 00:15:36.667 } 00:15:36.928 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:36.928 fio-3.35 00:15:36.928 Starting 1 thread 00:15:43.518 00:15:43.518 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71708: Thu Dec 5 17:02:16 2024 00:15:43.518 read: IOPS=43.7k, BW=171MiB/s (179MB/s)(854MiB/5001msec) 00:15:43.518 slat (usec): min=2, max=118, avg= 3.09, stdev= 1.23 00:15:43.518 clat (usec): min=489, max=3814, avg=1340.97, stdev=246.58 00:15:43.518 lat (usec): min=492, max=3856, avg=1344.06, stdev=246.69 00:15:43.518 clat percentiles (usec): 00:15:43.518 | 1.00th=[ 979], 5.00th=[ 1057], 10.00th=[ 1090], 20.00th=[ 1156], 00:15:43.518 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1287], 60.00th=[ 1336], 00:15:43.518 | 70.00th=[ 1401], 80.00th=[ 1500], 90.00th=[ 1696], 95.00th=[ 1844], 00:15:43.518 | 99.00th=[ 2147], 99.50th=[ 2212], 99.90th=[ 2474], 99.95th=[ 2573], 00:15:43.518 | 99.99th=[ 3589] 00:15:43.518 bw ( KiB/s): min=166400, max=183808, per=99.78%, avg=174494.11, stdev=6622.06, samples=9 00:15:43.518 iops : min=41600, max=45952, avg=43623.44, stdev=1655.43, samples=9 00:15:43.518 lat (usec) : 500=0.01%, 750=0.01%, 1000=1.50% 00:15:43.518 lat (msec) : 2=96.16%, 4=2.33% 00:15:43.518 cpu : usr=80.62%, sys=16.70%, ctx=16, majf=0, minf=762 00:15:43.518 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:43.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:43.518 complete : 0=0.0%, 4=98.5%, 8=0.0%, 
16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:43.518 issued rwts: total=218648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:43.518 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:43.518 00:15:43.518 Run status group 0 (all jobs): 00:15:43.518 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=854MiB (896MB), run=5001-5001msec 00:15:43.779 ----------------------------------------------------- 00:15:43.779 Suppressions used: 00:15:43.779 count bytes template 00:15:43.779 1 11 /usr/src/fio/parse.c 00:15:43.779 1 8 libtcmalloc_minimal.so 00:15:43.779 1 904 libcrypto.so 00:15:43.779 ----------------------------------------------------- 00:15:43.779 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:43.779 17:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 
--bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:43.779 { 00:15:43.779 "subsystems": [ 00:15:43.779 { 00:15:43.779 "subsystem": "bdev", 00:15:43.779 "config": [ 00:15:43.779 { 00:15:43.779 "params": { 00:15:43.779 "io_mechanism": "io_uring_cmd", 00:15:43.779 "conserve_cpu": true, 00:15:43.779 "filename": "/dev/ng0n1", 00:15:43.779 "name": "xnvme_bdev" 00:15:43.779 }, 00:15:43.779 "method": "bdev_xnvme_create" 00:15:43.779 }, 00:15:43.779 { 00:15:43.779 "method": "bdev_wait_for_examine" 00:15:43.779 } 00:15:43.779 ] 00:15:43.779 } 00:15:43.779 ] 00:15:43.779 } 00:15:44.039 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:44.039 fio-3.35 00:15:44.039 Starting 1 thread 00:15:50.618 00:15:50.618 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71800: Thu Dec 5 17:02:23 2024 00:15:50.618 write: IOPS=38.9k, BW=152MiB/s (159MB/s)(760MiB/5001msec); 0 zone resets 00:15:50.618 slat (usec): min=2, max=103, avg= 3.87, stdev= 1.86 00:15:50.618 clat (usec): min=70, max=24051, avg=1496.67, stdev=1395.48 00:15:50.618 lat (usec): min=73, max=24055, avg=1500.54, stdev=1395.57 00:15:50.618 clat percentiles (usec): 00:15:50.618 | 1.00th=[ 922], 5.00th=[ 1057], 10.00th=[ 1106], 20.00th=[ 1172], 00:15:50.618 | 30.00th=[ 1221], 40.00th=[ 1287], 50.00th=[ 1352], 60.00th=[ 1418], 00:15:50.618 | 70.00th=[ 1483], 80.00th=[ 1582], 90.00th=[ 1729], 95.00th=[ 1876], 00:15:50.618 | 99.00th=[ 2573], 99.50th=[15795], 99.90th=[20317], 99.95th=[21365], 00:15:50.618 | 99.99th=[22676] 00:15:50.618 bw ( KiB/s): min=73336, max=180472, per=98.95%, avg=153895.78, stdev=32089.53, samples=9 00:15:50.618 iops : min=18334, max=45118, avg=38473.89, stdev=8022.38, samples=9 00:15:50.618 lat (usec) : 100=0.01%, 250=0.13%, 500=0.24%, 750=0.31%, 1000=1.52% 00:15:50.618 lat (msec) : 2=94.78%, 4=2.23%, 10=0.03%, 20=0.63%, 50=0.12% 00:15:50.618 cpu : usr=62.04%, sys=33.04%, ctx=22, majf=0, minf=763 00:15:50.618 IO depths : 1=1.4%, 2=2.9%, 4=6.0%, 8=12.2%, 16=24.6%, 32=50.8%, >=64=2.1% 00:15:50.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.618 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:50.618 issued rwts: total=0,194458,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:50.618 00:15:50.618 Run status group 0 (all jobs): 00:15:50.618 WRITE: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=760MiB (796MB), run=5001-5001msec 00:15:50.618 ----------------------------------------------------- 00:15:50.618 Suppressions used: 00:15:50.618 count bytes template 00:15:50.618 1 11 /usr/src/fio/parse.c 00:15:50.618 1 8 libtcmalloc_minimal.so 00:15:50.618 1 904 libcrypto.so 00:15:50.618 ----------------------------------------------------- 00:15:50.618 00:15:50.618 00:15:50.618 real 0m13.859s 00:15:50.618 user 0m10.028s 00:15:50.618 sys 0m3.126s 00:15:50.618 ************************************ 00:15:50.618 END TEST xnvme_fio_plugin 00:15:50.618 ************************************ 00:15:50.618 17:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.618 17:02:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:50.618 17:02:24 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71297 00:15:50.618 17:02:24 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71297 ']' 00:15:50.618 17:02:24 nvme_xnvme -- 
common/autotest_common.sh@958 -- # kill -0 71297 00:15:50.618 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71297) - No such process 00:15:50.618 Process with pid 71297 is not found 00:15:50.618 17:02:24 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71297 is not found' 00:15:50.618 00:15:50.618 real 3m31.041s 00:15:50.618 user 2m3.553s 00:15:50.618 sys 1m12.901s 00:15:50.618 ************************************ 00:15:50.618 END TEST nvme_xnvme 00:15:50.618 ************************************ 00:15:50.618 17:02:24 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.618 17:02:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.618 17:02:24 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:50.618 17:02:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.618 17:02:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.618 17:02:24 -- common/autotest_common.sh@10 -- # set +x 00:15:50.618 ************************************ 00:15:50.618 START TEST blockdev_xnvme 00:15:50.618 ************************************ 00:15:50.618 17:02:24 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:50.878 * Looking for test storage... 00:15:50.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.878 17:02:25 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.878 --rc genhtml_branch_coverage=1 00:15:50.878 --rc genhtml_function_coverage=1 00:15:50.878 --rc genhtml_legend=1 00:15:50.878 --rc geninfo_all_blocks=1 00:15:50.878 --rc geninfo_unexecuted_blocks=1 00:15:50.878 00:15:50.878 ' 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.878 --rc genhtml_branch_coverage=1 00:15:50.878 --rc genhtml_function_coverage=1 00:15:50.878 --rc genhtml_legend=1 00:15:50.878 --rc geninfo_all_blocks=1 00:15:50.878 --rc geninfo_unexecuted_blocks=1 00:15:50.878 00:15:50.878 ' 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.878 --rc genhtml_branch_coverage=1 00:15:50.878 --rc genhtml_function_coverage=1 00:15:50.878 --rc genhtml_legend=1 00:15:50.878 --rc geninfo_all_blocks=1 00:15:50.878 --rc geninfo_unexecuted_blocks=1 00:15:50.878 00:15:50.878 ' 00:15:50.878 17:02:25 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:50.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.878 --rc genhtml_branch_coverage=1 00:15:50.878 --rc genhtml_function_coverage=1 00:15:50.878 --rc genhtml_legend=1 00:15:50.878 --rc geninfo_all_blocks=1 00:15:50.878 --rc geninfo_unexecuted_blocks=1 00:15:50.878 00:15:50.878 ' 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:50.878 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=71934 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 71934 00:15:50.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 71934 ']' 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.879 17:02:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.879 17:02:25 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:50.879 [2024-12-05 17:02:25.226266] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
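Earlier in this log, the xnvme_fio_plugin test drove fio's external spdk_bdev engine directly rather than going through bdevperf. A standalone sketch of that invocation, assuming the logged paths; the libasan preload is only needed because this is an ASAN build, and a temp file again stands in for the /dev/fd/62 descriptor fed by gen_conf:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  conf=$(mktemp)
  printf '%s' '{"subsystems":[{"subsystem":"bdev","config":[{"method":"bdev_xnvme_create","params":{"io_mechanism":"io_uring_cmd","conserve_cpu":true,"filename":"/dev/ng0n1","name":"xnvme_bdev"}},{"method":"bdev_wait_for_examine"}]}]}' > "$conf"
  LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK_DIR/build/fio/spdk_bdev" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf="$conf" \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev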
00:15:50.879 [2024-12-05 17:02:25.226422] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71934 ] 00:15:51.138 [2024-12-05 17:02:25.392109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:51.397 [2024-12-05 17:02:25.514402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.965 17:02:26 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.965 17:02:26 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:51.965 17:02:26 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:51.965 17:02:26 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:51.965 17:02:26 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:51.965 17:02:26 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:51.965 17:02:26 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:52.535 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:53.109 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:53.109 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:53.109 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:53.109 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:53.109 17:02:27 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:53.109 nvme0n1 00:15:53.109 nvme0n2 00:15:53.109 nvme0n3 00:15:53.109 nvme1n1 00:15:53.109 nvme2n1 00:15:53.109 nvme3n1 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 
17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.109 17:02:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.109 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ed945d8f-8d75-4403-ac3c-6015f7053489"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed945d8f-8d75-4403-ac3c-6015f7053489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f15c963e-b491-4b08-8be8-3fd5fd6f15eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f15c963e-b491-4b08-8be8-3fd5fd6f15eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4636bc36-2704-4f5b-a7bb-100a0f74aaee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4636bc36-2704-4f5b-a7bb-100a0f74aaee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d80d1be9-ed61-4753-ace0-77d598da975d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d80d1be9-ed61-4753-ace0-77d598da975d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1d601018-b7f4-4939-8986-453449100fc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1d601018-b7f4-4939-8986-453449100fc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7c24dca5-1ea0-4f70-abba-6bd32f992a43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7c24dca5-1ea0-4f70-abba-6bd32f992a43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:53.369 17:02:27 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 71934 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71934 ']' 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 71934 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 71934 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71934' 00:15:53.369 killing process with pid 71934 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 71934 00:15:53.369 17:02:27 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 71934 00:15:55.278 17:02:29 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:55.278 17:02:29 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:55.278 17:02:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:55.278 17:02:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.278 17:02:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:55.278 ************************************ 00:15:55.278 START TEST bdev_hello_world 00:15:55.278 ************************************ 00:15:55.278 17:02:29 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:55.278 [2024-12-05 17:02:29.305482] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:15:55.278 [2024-12-05 17:02:29.305619] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72214 ] 00:15:55.278 [2024-12-05 17:02:29.470978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.278 [2024-12-05 17:02:29.591578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.849 [2024-12-05 17:02:30.000865] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:55.849 [2024-12-05 17:02:30.000917] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:55.849 [2024-12-05 17:02:30.000936] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:55.849 [2024-12-05 17:02:30.003121] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:55.849 [2024-12-05 17:02:30.004741] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:55.849 [2024-12-05 17:02:30.004791] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:55.849 [2024-12-05 17:02:30.005328] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
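This bdev_hello_world pass reduces to one run of the hello_bdev example against the bdev.json generated during setup (one bdev_xnvme_create entry per /dev/nvme*n* device), with -b naming the bdev to open; the trailing empty argument in the logged command is the harness's env_ctx placeholder and is dropped here. Replayed by hand, paths as logged:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/hello_bdev" \
    --json "$SPDK_DIR/test/bdev/bdev.json" -b nvme0n1
  # expected NOTICE sequence, as above: open the bdev, write "Hello World!",
  # read it back, then stop the app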
00:15:55.849 00:15:55.849 [2024-12-05 17:02:30.005367] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:56.790 00:15:56.790 real 0m1.578s 00:15:56.790 user 0m1.178s 00:15:56.790 sys 0m0.250s 00:15:56.790 17:02:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:56.790 ************************************ 00:15:56.790 END TEST bdev_hello_world 00:15:56.790 ************************************ 00:15:56.790 17:02:30 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 17:02:30 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:56.790 17:02:30 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:56.790 17:02:30 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:56.790 17:02:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 ************************************ 00:15:56.790 START TEST bdev_bounds 00:15:56.790 ************************************ 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72249 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:56.790 Process bdevio pid: 72249 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72249' 00:15:56.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72249 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72249 ']' 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:56.790 17:02:30 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:56.790 [2024-12-05 17:02:30.963401] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:15:56.790 [2024-12-05 17:02:30.963555] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72249 ] 00:15:56.790 [2024-12-05 17:02:31.129582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:57.051 [2024-12-05 17:02:31.260145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.051 [2024-12-05 17:02:31.260520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.051 [2024-12-05 17:02:31.260605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.623 17:02:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.623 17:02:31 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:57.623 17:02:31 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:57.623 I/O targets: 00:15:57.623 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:57.623 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:57.623 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:57.623 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:57.623 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:57.623 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:57.623 00:15:57.623 00:15:57.623 CUnit - A unit testing framework for C - Version 2.1-3 00:15:57.623 http://cunit.sourceforge.net/ 00:15:57.623 00:15:57.623 00:15:57.623 Suite: bdevio tests on: nvme3n1 00:15:57.623 Test: blockdev write read block ...passed 00:15:57.623 Test: blockdev write zeroes read block ...passed 00:15:57.623 Test: blockdev write zeroes read no split ...passed 00:15:57.623 Test: blockdev write zeroes read split ...passed 00:15:57.623 Test: blockdev write zeroes read split partial ...passed 00:15:57.623 Test: blockdev reset ...passed 00:15:57.623 Test: blockdev write read 8 blocks ...passed 00:15:57.623 Test: blockdev write read size > 128k ...passed 00:15:57.623 Test: blockdev write read invalid size ...passed 00:15:57.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:57.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:57.623 Test: blockdev write read max offset ...passed 00:15:57.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:57.884 Test: blockdev writev readv 8 blocks ...passed 00:15:57.884 Test: blockdev writev readv 30 x 1block ...passed 00:15:57.884 Test: blockdev writev readv block ...passed 00:15:57.884 Test: blockdev writev readv size > 128k ...passed 00:15:57.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:57.884 Test: blockdev comparev and writev ...passed 00:15:57.884 Test: blockdev nvme passthru rw ...passed 00:15:57.884 Test: blockdev nvme passthru vendor specific ...passed 00:15:57.884 Test: blockdev nvme admin passthru ...passed 00:15:57.884 Test: blockdev copy ...passed 00:15:57.884 Suite: bdevio tests on: nvme2n1 00:15:57.884 Test: blockdev write read block ...passed 00:15:57.884 Test: blockdev write zeroes read block ...passed 00:15:57.884 Test: blockdev write zeroes read no split ...passed 00:15:57.884 Test: blockdev write zeroes read split ...passed 00:15:57.884 Test: blockdev write zeroes read split partial ...passed 00:15:57.884 Test: blockdev reset ...passed 
00:15:57.884 Test: blockdev write read 8 blocks ...passed 00:15:57.884 Test: blockdev write read size > 128k ...passed 00:15:57.884 Test: blockdev write read invalid size ...passed 00:15:57.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:57.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:57.884 Test: blockdev write read max offset ...passed 00:15:57.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:57.884 Test: blockdev writev readv 8 blocks ...passed 00:15:57.884 Test: blockdev writev readv 30 x 1block ...passed 00:15:57.884 Test: blockdev writev readv block ...passed 00:15:57.884 Test: blockdev writev readv size > 128k ...passed 00:15:57.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:57.884 Test: blockdev comparev and writev ...passed 00:15:57.884 Test: blockdev nvme passthru rw ...passed 00:15:57.884 Test: blockdev nvme passthru vendor specific ...passed 00:15:57.884 Test: blockdev nvme admin passthru ...passed 00:15:57.884 Test: blockdev copy ...passed 00:15:57.884 Suite: bdevio tests on: nvme1n1 00:15:57.884 Test: blockdev write read block ...passed 00:15:57.884 Test: blockdev write zeroes read block ...passed 00:15:57.884 Test: blockdev write zeroes read no split ...passed 00:15:57.884 Test: blockdev write zeroes read split ...passed 00:15:57.884 Test: blockdev write zeroes read split partial ...passed 00:15:57.884 Test: blockdev reset ...passed 00:15:57.884 Test: blockdev write read 8 blocks ...passed 00:15:57.884 Test: blockdev write read size > 128k ...passed 00:15:57.884 Test: blockdev write read invalid size ...passed 00:15:57.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:57.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:57.884 Test: blockdev write read max offset ...passed 00:15:57.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:57.884 Test: blockdev writev readv 8 blocks ...passed 00:15:57.884 Test: blockdev writev readv 30 x 1block ...passed 00:15:57.884 Test: blockdev writev readv block ...passed 00:15:57.884 Test: blockdev writev readv size > 128k ...passed 00:15:57.884 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:57.884 Test: blockdev comparev and writev ...passed 00:15:57.884 Test: blockdev nvme passthru rw ...passed 00:15:57.884 Test: blockdev nvme passthru vendor specific ...passed 00:15:57.884 Test: blockdev nvme admin passthru ...passed 00:15:57.884 Test: blockdev copy ...passed 00:15:57.884 Suite: bdevio tests on: nvme0n3 00:15:57.884 Test: blockdev write read block ...passed 00:15:57.884 Test: blockdev write zeroes read block ...passed 00:15:57.884 Test: blockdev write zeroes read no split ...passed 00:15:57.884 Test: blockdev write zeroes read split ...passed 00:15:57.884 Test: blockdev write zeroes read split partial ...passed 00:15:57.884 Test: blockdev reset ...passed 00:15:57.884 Test: blockdev write read 8 blocks ...passed 00:15:57.884 Test: blockdev write read size > 128k ...passed 00:15:57.884 Test: blockdev write read invalid size ...passed 00:15:57.884 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:57.884 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:57.884 Test: blockdev write read max offset ...passed 00:15:57.884 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:57.884 Test: blockdev writev readv 8 blocks 
...passed 00:15:58.146 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.146 Test: blockdev writev readv block ...passed 00:15:58.146 Test: blockdev writev readv size > 128k ...passed 00:15:58.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.146 Test: blockdev comparev and writev ...passed 00:15:58.146 Test: blockdev nvme passthru rw ...passed 00:15:58.146 Test: blockdev nvme passthru vendor specific ...passed 00:15:58.146 Test: blockdev nvme admin passthru ...passed 00:15:58.146 Test: blockdev copy ...passed 00:15:58.146 Suite: bdevio tests on: nvme0n2 00:15:58.146 Test: blockdev write read block ...passed 00:15:58.146 Test: blockdev write zeroes read block ...passed 00:15:58.146 Test: blockdev write zeroes read no split ...passed 00:15:58.146 Test: blockdev write zeroes read split ...passed 00:15:58.146 Test: blockdev write zeroes read split partial ...passed 00:15:58.146 Test: blockdev reset ...passed 00:15:58.146 Test: blockdev write read 8 blocks ...passed 00:15:58.146 Test: blockdev write read size > 128k ...passed 00:15:58.146 Test: blockdev write read invalid size ...passed 00:15:58.146 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.146 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.146 Test: blockdev write read max offset ...passed 00:15:58.146 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.146 Test: blockdev writev readv 8 blocks ...passed 00:15:58.146 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.146 Test: blockdev writev readv block ...passed 00:15:58.146 Test: blockdev writev readv size > 128k ...passed 00:15:58.146 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.146 Test: blockdev comparev and writev ...passed 00:15:58.146 Test: blockdev nvme passthru rw ...passed 00:15:58.146 Test: blockdev nvme passthru vendor specific ...passed 00:15:58.146 Test: blockdev nvme admin passthru ...passed 00:15:58.146 Test: blockdev copy ...passed 00:15:58.146 Suite: bdevio tests on: nvme0n1 00:15:58.146 Test: blockdev write read block ...passed 00:15:58.146 Test: blockdev write zeroes read block ...passed 00:15:58.146 Test: blockdev write zeroes read no split ...passed 00:15:58.406 Test: blockdev write zeroes read split ...passed 00:15:58.406 Test: blockdev write zeroes read split partial ...passed 00:15:58.406 Test: blockdev reset ...passed 00:15:58.406 Test: blockdev write read 8 blocks ...passed 00:15:58.406 Test: blockdev write read size > 128k ...passed 00:15:58.406 Test: blockdev write read invalid size ...passed 00:15:58.406 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:58.406 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:58.406 Test: blockdev write read max offset ...passed 00:15:58.406 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:58.406 Test: blockdev writev readv 8 blocks ...passed 00:15:58.406 Test: blockdev writev readv 30 x 1block ...passed 00:15:58.406 Test: blockdev writev readv block ...passed 00:15:58.406 Test: blockdev writev readv size > 128k ...passed 00:15:58.406 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:58.406 Test: blockdev comparev and writev ...passed 00:15:58.406 Test: blockdev nvme passthru rw ...passed 00:15:58.406 Test: blockdev nvme passthru vendor specific ...passed 00:15:58.406 Test: blockdev nvme admin passthru ...passed 00:15:58.406 Test: blockdev copy ...passed 
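Every bdev gets the same battery: the six suites above each ran the identical 23 tests, which is why the summary that follows counts 138 tests and 780 asserts across 6 suites. Once the suites report, the harness tears the app down with its killprocess helper: probe that the pid is still alive, confirm it is the reactor process rather than a sudo wrapper, then signal and reap it. A hedged sketch of that teardown, reusing the bdevio_pid assumption from the launch sketch earlier:

    # killprocess pattern as traced below: probe, identify, signal, reap
    if kill -0 "$bdevio_pid" 2>/dev/null; then            # pid still alive?
        ps --no-headers -o comm= "$bdevio_pid"            # expect reactor_0, not sudo
        echo "killing process with pid $bdevio_pid"
        kill "$bdevio_pid"
        wait "$bdevio_pid" 2>/dev/null || true            # reap so run_test can record the timing
    fi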
00:15:58.406 00:15:58.406 Run Summary: Type Total Ran Passed Failed Inactive 00:15:58.406 suites 6 6 n/a 0 0 00:15:58.406 tests 138 138 138 0 0 00:15:58.406 asserts 780 780 780 0 n/a 00:15:58.406 00:15:58.406 Elapsed time = 1.890 seconds 00:15:58.406 0 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72249 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72249 ']' 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72249 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.406 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72249 00:15:58.667 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.667 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.667 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72249' 00:15:58.667 killing process with pid 72249 00:15:58.667 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72249 00:15:58.667 17:02:32 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72249 00:15:59.239 17:02:33 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:59.239 00:15:59.239 real 0m2.696s 00:15:59.239 user 0m6.400s 00:15:59.239 sys 0m0.396s 00:15:59.239 17:02:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.239 ************************************ 00:15:59.239 END TEST bdev_bounds 00:15:59.239 ************************************ 00:15:59.240 17:02:33 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:59.501 17:02:33 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:59.501 17:02:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:59.501 17:02:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.501 17:02:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:59.501 ************************************ 00:15:59.501 START TEST bdev_nbd 00:15:59.501 ************************************ 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72312 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72312 /var/tmp/spdk-nbd.sock 00:15:59.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72312 ']' 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:59.501 17:02:33 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:59.501 [2024-12-05 17:02:33.740878] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
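bdev_nbd pushes the same six bdevs through the kernel NBD driver instead of CUnit: a lightweight bdev_svc app is started on its own socket (/var/tmp/spdk-nbd.sock), each bdev is exported to a /dev/nbdX node with the nbd_start_disk RPC, and the waitfornbd helper polls /proc/partitions and then issues a single direct-I/O read before the node counts as usable. A condensed sketch of that export-and-verify step for one bdev, using the socket and paths shown in this log; the retry bound of 20 mirrors the helper's loop, while the 0.1 s sleep is an assumed backoff:

    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"   # bdev_svc RPC socket from this run
    # export the first bdev through the kernel NBD driver
    # (guarded by the [[ -e /sys/module/nbd ]] check above)
    $RPC nbd_start_disk nvme0n1 /dev/nbd0
    # waitfornbd equivalent: give the kernel up to 20 tries to register the node...
    for i in $(seq 1 20); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done
    # ...then prove it is readable with one 4 KiB direct read, as the helper does
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct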
00:15:59.501 [2024-12-05 17:02:33.741041] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.763 [2024-12-05 17:02:33.904600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.763 [2024-12-05 17:02:34.028269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:00.335 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.596 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.597 
1+0 records in 00:16:00.597 1+0 records out 00:16:00.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813105 s, 5.0 MB/s 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:00.597 17:02:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.856 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.857 1+0 records in 00:16:00.857 1+0 records out 00:16:00.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100821 s, 4.1 MB/s 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:00.857 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:16:01.117 17:02:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.117 1+0 records in 00:16:01.117 1+0 records out 00:16:01.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0013999 s, 2.9 MB/s 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.117 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.377 1+0 records in 00:16:01.377 1+0 records out 00:16:01.377 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0012797 s, 3.2 MB/s 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat 
-c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.377 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.638 1+0 records in 00:16:01.638 1+0 records out 00:16:01.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118706 s, 3.5 MB/s 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.638 17:02:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd 
-- common/autotest_common.sh@873 -- # local i 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:01.899 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:01.900 1+0 records in 00:16:01.900 1+0 records out 00:16:01.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571235 s, 7.2 MB/s 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:16:01.900 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd0", 00:16:02.159 "bdev_name": "nvme0n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd1", 00:16:02.159 "bdev_name": "nvme0n2" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd2", 00:16:02.159 "bdev_name": "nvme0n3" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd3", 00:16:02.159 "bdev_name": "nvme1n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd4", 00:16:02.159 "bdev_name": "nvme2n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd5", 00:16:02.159 "bdev_name": "nvme3n1" 00:16:02.159 } 00:16:02.159 ]' 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd0", 00:16:02.159 "bdev_name": "nvme0n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd1", 00:16:02.159 "bdev_name": "nvme0n2" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd2", 00:16:02.159 "bdev_name": "nvme0n3" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd3", 00:16:02.159 "bdev_name": "nvme1n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd4", 00:16:02.159 "bdev_name": "nvme2n1" 00:16:02.159 }, 00:16:02.159 { 00:16:02.159 "nbd_device": "/dev/nbd5", 00:16:02.159 "bdev_name": "nvme3n1" 00:16:02.159 } 00:16:02.159 ]' 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.159 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.420 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.681 17:02:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:02.942 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:03.202 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.464 17:02:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:16:03.725 /dev/nbd0 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.725 1+0 records in 00:16:03.725 1+0 records out 00:16:03.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366176 s, 11.2 MB/s 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.725 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:16:03.987 /dev/nbd1 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:03.987 1+0 records in 00:16:03.987 1+0 records out 00:16:03.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467617 s, 8.8 MB/s 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:03.987 17:02:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:03.987 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:16:04.248 /dev/nbd10 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.248 1+0 records in 00:16:04.248 1+0 records out 00:16:04.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048012 s, 8.5 MB/s 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.248 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:16:04.508 /dev/nbd11 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.508 17:02:38 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.508 1+0 records in 00:16:04.508 1+0 records out 00:16:04.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548259 s, 7.5 MB/s 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.508 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:16:04.769 /dev/nbd12 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.769 1+0 records in 00:16:04.769 1+0 records out 00:16:04.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000839033 s, 4.9 MB/s 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:04.769 17:02:38 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:16:04.769 /dev/nbd13 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.030 1+0 records in 00:16:05.030 1+0 records out 00:16:05.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823969 s, 5.0 MB/s 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:05.030 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:16:05.030 { 00:16:05.030 "nbd_device": "/dev/nbd0", 00:16:05.030 "bdev_name": "nvme0n1" 00:16:05.030 }, 00:16:05.030 { 00:16:05.030 "nbd_device": "/dev/nbd1", 00:16:05.030 "bdev_name": "nvme0n2" 00:16:05.030 }, 00:16:05.030 { 00:16:05.030 "nbd_device": "/dev/nbd10", 00:16:05.030 "bdev_name": "nvme0n3" 00:16:05.030 }, 00:16:05.030 { 00:16:05.030 "nbd_device": "/dev/nbd11", 00:16:05.030 "bdev_name": "nvme1n1" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd12", 00:16:05.031 "bdev_name": "nvme2n1" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd13", 00:16:05.031 "bdev_name": "nvme3n1" 00:16:05.031 } 00:16:05.031 ]' 00:16:05.031 17:02:39 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd0", 00:16:05.031 "bdev_name": "nvme0n1" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd1", 00:16:05.031 "bdev_name": "nvme0n2" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd10", 00:16:05.031 "bdev_name": "nvme0n3" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd11", 00:16:05.031 "bdev_name": "nvme1n1" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd12", 00:16:05.031 "bdev_name": "nvme2n1" 00:16:05.031 }, 00:16:05.031 { 00:16:05.031 "nbd_device": "/dev/nbd13", 00:16:05.031 "bdev_name": "nvme3n1" 00:16:05.031 } 00:16:05.031 ]' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:16:05.031 /dev/nbd1 00:16:05.031 /dev/nbd10 00:16:05.031 /dev/nbd11 00:16:05.031 /dev/nbd12 00:16:05.031 /dev/nbd13' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:16:05.031 /dev/nbd1 00:16:05.031 /dev/nbd10 00:16:05.031 /dev/nbd11 00:16:05.031 /dev/nbd12 00:16:05.031 /dev/nbd13' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:16:05.031 256+0 records in 00:16:05.031 256+0 records out 00:16:05.031 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00713187 s, 147 MB/s 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:05.031 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:16:05.292 256+0 records in 00:16:05.292 256+0 records out 00:16:05.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225124 s, 4.7 MB/s 00:16:05.292 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:05.292 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:16:05.552 256+0 records in 00:16:05.552 256+0 records out 00:16:05.552 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.239598 s, 4.4 MB/s 00:16:05.552 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:05.552 17:02:39 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:16:05.845 256+0 records in 00:16:05.845 256+0 records out 00:16:05.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154038 s, 6.8 MB/s 00:16:05.845 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:05.845 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:16:06.149 256+0 records in 00:16:06.149 256+0 records out 00:16:06.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21972 s, 4.8 MB/s 00:16:06.149 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.149 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:16:06.149 256+0 records in 00:16:06.149 256+0 records out 00:16:06.149 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100427 s, 10.4 MB/s 00:16:06.149 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:16:06.149 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:16:06.411 256+0 records in 00:16:06.411 256+0 records out 00:16:06.411 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188304 s, 5.6 MB/s 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.411 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.672 17:02:40 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.672 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.933 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.194 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:07.455 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:07.717 17:02:41 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.717 17:02:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:16:07.717 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:16:07.717 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:16:07.717 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:16:07.978 malloc_lvol_verify 00:16:07.978 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:16:08.239 c942074e-667e-4d29-a6f1-35e9543350a7 00:16:08.239 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:16:08.498 77e1c37d-3c9a-4fd6-bcda-a9a44d7762da 00:16:08.498 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:16:08.758 /dev/nbd0 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
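The nbd_with_lvol_verify step above stacks a logical volume on a malloc bdev, exports it through the kernel NBD driver, and then formats it, so a passing run proves the whole bdev-to-kernel-block-device path. A minimal sketch of the same sequence, using only the rpc.py calls visible in this log (the mke2fs output it produces follows below):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk-nbd.sock
  "$RPC" -s "$SOCK" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB malloc bdev, 512 B blocks
  "$RPC" -s "$SOCK" bdev_lvol_create_lvstore malloc_lvol_verify lvs  # lvstore on top of the malloc bdev
  "$RPC" -s "$SOCK" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB logical volume inside it
  "$RPC" -s "$SOCK" nbd_start_disk lvs/lvol /dev/nbd0                # expose the lvol as /dev/nbd0
  mkfs.ext4 /dev/nbd0                                                # kernel-side sanity check
  "$RPC" -s "$SOCK" nbd_stop_disk /dev/nbd0                          # tear the mapping down again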
00:16:08.758 mke2fs 1.47.0 (5-Feb-2023) 00:16:08.758 Discarding device blocks: 0/4096 done 00:16:08.758 Creating filesystem with 4096 1k blocks and 1024 inodes 00:16:08.758 00:16:08.758 Allocating group tables: 0/1 done 00:16:08.758 Writing inode tables: 0/1 done 00:16:08.758 Creating journal (1024 blocks): done 00:16:08.758 Writing superblocks and filesystem accounting information: 0/1 done 00:16:08.758 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:08.758 17:02:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72312 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72312 ']' 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72312 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:16:09.018 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72312 00:16:09.019 killing process with pid 72312 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72312' 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72312 00:16:09.019 17:02:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72312 00:16:09.956 ************************************ 00:16:09.956 END TEST bdev_nbd 00:16:09.956 ************************************ 00:16:09.956 17:02:44 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:16:09.956 00:16:09.957 real 0m10.356s 00:16:09.957 user 0m14.035s 00:16:09.957 sys 0m3.438s 00:16:09.957 17:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.957 
17:02:44 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:16:09.957 17:02:44 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:16:09.957 17:02:44 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:16:09.957 17:02:44 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:16:09.957 17:02:44 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:16:09.957 17:02:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:09.957 17:02:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.957 17:02:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:09.957 ************************************ 00:16:09.957 START TEST bdev_fio 00:16:09.957 ************************************ 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:16:09.957 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo 
serialize_overlap=1 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:09.957 ************************************ 00:16:09.957 START TEST bdev_fio_rw_verify 00:16:09.957 ************************************ 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:09.957 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:09.958 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:09.958 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:16:09.958 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:09.958 17:02:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:16:10.217 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:16:10.217 fio-3.35 00:16:10.217 Starting 6 threads 00:16:22.449 00:16:22.449 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72719: Thu Dec 5 17:02:55 2024 00:16:22.449 read: IOPS=17.1k, BW=66.8MiB/s (70.1MB/s)(669MiB/10002msec) 00:16:22.449 slat (usec): min=2, max=2784, avg= 5.96, stdev=20.21 00:16:22.449 clat (usec): min=69, max=8678, avg=1072.02, 
stdev=737.58
00:16:22.449 lat (usec): min=72, max=8683, avg=1077.99, stdev=738.42
00:16:22.449 clat percentiles (usec):
00:16:22.449 | 50.000th=[ 898], 99.000th=[ 3490], 99.900th=[ 5014], 99.990th=[ 6259],
00:16:22.449 | 99.999th=[ 8717]
00:16:22.449 write: IOPS=17.5k, BW=68.4MiB/s (71.7MB/s)(684MiB/10002msec); 0 zone resets
00:16:22.449 slat (usec): min=6, max=4401, avg=39.39, stdev=132.20
00:16:22.449 clat (usec): min=63, max=11794, avg=1396.27, stdev=879.49
00:16:22.449 lat (usec): min=78, max=11829, avg=1435.66, stdev=892.77
00:16:22.449 clat percentiles (usec):
00:16:22.449 | 50.000th=[ 1221], 99.000th=[ 4293], 99.900th=[ 6980], 99.990th=[ 9241],
00:16:22.449 | 99.999th=[10552]
00:16:22.449 bw ( KiB/s): min=45271, max=118592, per=100.00%, avg=70798.89, stdev=3406.33, samples=114
00:16:22.449 iops : min=11316, max=29648, avg=17698.84, stdev=851.61, samples=114
00:16:22.449 lat (usec) : 100=0.03%, 250=3.87%, 500=12.60%, 750=16.46%, 1000=14.04%
00:16:22.449 lat (msec) : 2=37.69%, 4=14.40%, 10=0.90%, 20=0.01%
00:16:22.449 cpu : usr=39.71%, sys=34.17%, ctx=6446, majf=0, minf=16575
00:16:22.449 IO depths : 1=10.8%, 2=23.1%, 4=51.6%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:22.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:22.449 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:22.449 issued rwts: total=171139,175037,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:22.449 latency : target=0, window=0, percentile=100.00%, depth=8
00:16:22.449
00:16:22.449 Run status group 0 (all jobs):
00:16:22.449 READ: bw=66.8MiB/s (70.1MB/s), 66.8MiB/s-66.8MiB/s (70.1MB/s-70.1MB/s), io=669MiB (701MB), run=10002-10002msec
00:16:22.449 WRITE: bw=68.4MiB/s (71.7MB/s), 68.4MiB/s-68.4MiB/s (71.7MB/s-71.7MB/s), io=684MiB (717MB), run=10002-10002msec
00:16:22.449 -----------------------------------------------------
00:16:22.449 Suppressions used:
00:16:22.449 count bytes template
00:16:22.449 6 48 /usr/src/fio/parse.c
00:16:22.449 3784 363264 /usr/src/fio/iolog.c
00:16:22.449 1 8 libtcmalloc_minimal.so
00:16:22.449 1 904 libcrypto.so
00:16:22.449 -----------------------------------------------------
00:16:22.449
00:16:22.449
00:16:22.449 real 0m12.074s
00:16:22.449 user 0m25.396s
00:16:22.449 sys 0m20.855s
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:22.449 ************************************
00:16:22.449 END TEST bdev_fio_rw_verify
00:16:22.449 ************************************
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- 
# local fio_dir=/usr/src/fio 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:16:22.449 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ed945d8f-8d75-4403-ac3c-6015f7053489"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ed945d8f-8d75-4403-ac3c-6015f7053489",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "f15c963e-b491-4b08-8be8-3fd5fd6f15eb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f15c963e-b491-4b08-8be8-3fd5fd6f15eb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "4636bc36-2704-4f5b-a7bb-100a0f74aaee"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4636bc36-2704-4f5b-a7bb-100a0f74aaee",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "d80d1be9-ed61-4753-ace0-77d598da975d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d80d1be9-ed61-4753-ace0-77d598da975d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "1d601018-b7f4-4939-8986-453449100fc7"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "1d601018-b7f4-4939-8986-453449100fc7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "7c24dca5-1ea0-4f70-abba-6bd32f992a43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "7c24dca5-1ea0-4f70-abba-6bd32f992a43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:16:22.450 /home/vagrant/spdk_repo/spdk 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:16:22.450 ************************************ 00:16:22.450 END TEST bdev_fio 00:16:22.450 ************************************ 00:16:22.450 00:16:22.450 real 0m12.248s 00:16:22.450 user 0m25.468s 00:16:22.450 sys 0m20.937s 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.450 17:02:56 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:16:22.450 17:02:56 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:22.450 17:02:56 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:22.450 17:02:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:22.450 17:02:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.450 17:02:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:22.450 ************************************ 00:16:22.450 START TEST bdev_verify 00:16:22.450 ************************************ 00:16:22.450 17:02:56 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:16:22.450 [2024-12-05 17:02:56.466235] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:22.450 [2024-12-05 17:02:56.466380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72892 ] 00:16:22.450 [2024-12-05 17:02:56.627318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:22.450 [2024-12-05 17:02:56.748469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.450 [2024-12-05 17:02:56.748561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.020 Running I/O for 5 seconds... 
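bdev_verify above drives every bdev with SPDK's bdevperf example in its verify workload, which writes a known pattern and reads it back, failing the run on any miscompare. Spelled out, the invocation is the one from the run_test line (-C is copied from the log as-is; judging by the per-core result rows below, it lets both enabled cores submit I/O to every bdev):

  # -q 128   : keep 128 I/Os outstanding per job
  # -o 4096  : 4 KiB I/O size
  # -w verify: write a pattern, read it back, compare
  # -t 5     : run for 5 seconds
  # -m 0x3   : run reactors on cores 0 and 1
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3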
00:16:24.979 24096.00 IOPS, 94.12 MiB/s
[2024-12-05T17:03:00.732Z] 23104.00 IOPS, 90.25 MiB/s
[2024-12-05T17:03:01.675Z] 23381.00 IOPS, 91.33 MiB/s
[2024-12-05T17:03:02.620Z] 23239.75 IOPS, 90.78 MiB/s
[2024-12-05T17:03:02.620Z] 22572.60 IOPS, 88.17 MiB/s
00:16:28.253 Latency(us)
00:16:28.253 [2024-12-05T17:03:02.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:28.253 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0x80000
00:16:28.253 nvme0n1 : 5.07 1793.46 7.01 0.00 0.00 71242.12 9124.63 71383.83
00:16:28.253 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x80000 length 0x80000
00:16:28.253 nvme0n1 : 5.06 1771.01 6.92 0.00 0.00 72142.62 10989.88 68964.04
00:16:28.253 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0x80000
00:16:28.253 nvme0n2 : 5.07 1792.85 7.00 0.00 0.00 71132.30 7914.73 64931.05
00:16:28.253 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x80000 length 0x80000
00:16:28.253 nvme0n2 : 5.06 1770.08 6.91 0.00 0.00 72042.73 13510.50 63721.16
00:16:28.253 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0x80000
00:16:28.253 nvme0n3 : 5.06 1796.50 7.02 0.00 0.00 70847.20 6856.07 69367.34
00:16:28.253 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x80000 length 0x80000
00:16:28.253 nvme0n3 : 5.06 1769.56 6.91 0.00 0.00 71919.28 10082.46 63317.86
00:16:28.253 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0xbd0bd
00:16:28.253 nvme1n1 : 5.09 2387.27 9.33 0.00 0.00 52959.16 652.21 61301.37
00:16:28.253 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:28.253 nvme1n1 : 5.10 2378.81 9.29 0.00 0.00 53141.04 269.39 108083.99
00:16:28.253 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0xa0000
00:16:28.253 nvme2n1 : 5.05 1772.98 6.93 0.00 0.00 71534.93 9931.22 76223.41
00:16:28.253 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0xa0000 length 0xa0000
00:16:28.253 nvme2n1 : 5.07 1666.36 6.51 0.00 0.00 75911.33 8318.03 109697.18
00:16:28.253 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x0 length 0x20000
00:16:28.253 nvme3n1 : 5.07 1794.12 7.01 0.00 0.00 70544.13 4411.08 70577.23
00:16:28.253 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:28.253 Verification LBA range: start 0x20000 length 0x20000
00:16:28.253 nvme3n1 : 5.07 1791.88 7.00 0.00 0.00 70505.86 8670.92 69367.34
00:16:28.253 [2024-12-05T17:03:02.620Z] ===================================================================================================================
00:16:28.253 [2024-12-05T17:03:02.620Z] Total : 22484.89 87.83 0.00 0.00 67768.23 269.39 109697.18
00:16:28.827
00:16:28.827 real 0m6.760s
00:16:28.827 user 0m10.832s
00:16:28.827 sys 0m1.566s
00:16:28.827 ************************************
00:16:28.827 17:03:03 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.827 17:03:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:16:28.827 END TEST bdev_verify 00:16:28.827 ************************************ 00:16:29.089 17:03:03 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:29.089 17:03:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:16:29.089 17:03:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.089 17:03:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.089 ************************************ 00:16:29.089 START TEST bdev_verify_big_io 00:16:29.089 ************************************ 00:16:29.089 17:03:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:16:29.089 [2024-12-05 17:03:03.304428] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:29.089 [2024-12-05 17:03:03.304567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72993 ] 00:16:29.350 [2024-12-05 17:03:03.468352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:29.350 [2024-12-05 17:03:03.592710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.350 [2024-12-05 17:03:03.592729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.922 Running I/O for 5 seconds... 
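bdev_verify_big_io repeats the same bdevperf verify run with 64 KiB I/Os in place of 4 KiB, which is why the IOPS figures below are roughly an order of magnitude lower while each completed I/O moves sixteen times as much data. A hypothetical wrapper (not in the repo) makes the relationship between the two stages explicit:

  for io_size in 4096 65536; do   # the verify pass, then the big-I/O pass
      /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
          --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
          -q 128 -o "$io_size" -w verify -t 5 -C -m 0x3
  done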
00:16:35.472 2672.00 IOPS, 167.00 MiB/s
[2024-12-05T17:03:10.782Z] 3029.00 IOPS, 189.31 MiB/s
00:16:36.415 Latency(us)
00:16:36.415 [2024-12-05T17:03:10.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:36.415 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0x8000
00:16:36.415 nvme0n1 : 6.07 84.38 5.27 0.00 0.00 1472637.24 116149.96 1858399.31
00:16:36.415 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x8000 length 0x8000
00:16:36.415 nvme0n1 : 5.94 129.24 8.08 0.00 0.00 948544.07 6654.42 948557.98
00:16:36.415 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0x8000
00:16:36.415 nvme0n2 : 6.07 84.34 5.27 0.00 0.00 1390700.53 6049.48 1516402.22
00:16:36.415 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x8000 length 0x8000
00:16:36.415 nvme0n2 : 5.94 129.19 8.07 0.00 0.00 922211.51 96388.33 1090519.04
00:16:36.415 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0x8000
00:16:36.415 nvme0n3 : 6.10 52.49 3.28 0.00 0.00 2111251.69 21878.94 2477865.75
00:16:36.415 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x8000 length 0x8000
00:16:36.415 nvme0n3 : 5.95 118.37 7.40 0.00 0.00 978468.52 81062.99 1271196.75
00:16:36.415 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0xbd0b
00:16:36.415 nvme1n1 : 6.10 94.44 5.90 0.00 0.00 1113064.72 24802.86 1161499.57
00:16:36.415 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:36.415 nvme1n1 : 6.06 158.44 9.90 0.00 0.00 733251.64 2281.16 967916.31
00:16:36.415 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0xa000
00:16:36.415 nvme2n1 : 6.25 112.68 7.04 0.00 0.00 883609.27 1014.55 2387526.89
00:16:36.415 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0xa000 length 0xa000
00:16:36.415 nvme2n1 : 6.05 123.26 7.70 0.00 0.00 907152.04 98001.53 2258471.38
00:16:36.415 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x0 length 0x2000
00:16:36.415 nvme3n1 : 6.56 323.40 20.21 0.00 0.00 292453.76 387.54 1768060.46
00:16:36.415 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:36.415 Verification LBA range: start 0x2000 length 0x2000
00:16:36.415 nvme3n1 : 6.05 121.65 7.60 0.00 0.00 888294.59 16333.59 1432516.14
00:16:36.415 [2024-12-05T17:03:10.782Z] ===================================================================================================================
00:16:36.415 [2024-12-05T17:03:10.782Z] Total : 1531.89 95.74 0.00 0.00 868326.08 387.54 2477865.75
00:16:37.354
00:16:37.354 real 0m8.174s
00:16:37.354 user 0m15.070s
00:16:37.354 sys 0m0.435s
00:16:37.354 17:03:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:37.354 ************************************
00:16:37.354 END TEST bdev_verify_big_io 
************************************ 00:16:37.354 17:03:11 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:16:37.354 17:03:11 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.354 17:03:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:37.354 17:03:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.354 17:03:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:37.354 ************************************ 00:16:37.354 START TEST bdev_write_zeroes 00:16:37.354 ************************************ 00:16:37.354 17:03:11 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:37.354 [2024-12-05 17:03:11.531154] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:37.354 [2024-12-05 17:03:11.531263] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73103 ] 00:16:37.354 [2024-12-05 17:03:11.691723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.615 [2024-12-05 17:03:11.787421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.876 Running I/O for 1 seconds... 00:16:39.262 83264.00 IOPS, 325.25 MiB/s 00:16:39.262 Latency(us) 00:16:39.262 [2024-12-05T17:03:13.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:39.262 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme0n1 : 1.02 13309.97 51.99 0.00 0.00 9607.08 5419.32 19660.80 00:16:39.262 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme0n2 : 1.01 13247.51 51.75 0.00 0.00 9644.62 5394.12 19257.50 00:16:39.262 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme0n3 : 1.02 13231.38 51.69 0.00 0.00 9649.46 5469.74 20669.05 00:16:39.262 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme1n1 : 1.02 16578.63 64.76 0.00 0.00 7692.18 4335.46 15627.82 00:16:39.262 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme2n1 : 1.02 13214.92 51.62 0.00 0.00 9600.67 3780.92 23492.14 00:16:39.262 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:16:39.262 nvme3n1 : 1.02 13199.52 51.56 0.00 0.00 9604.00 3831.34 22887.19 00:16:39.262 [2024-12-05T17:03:13.629Z] =================================================================================================================== 00:16:39.262 [2024-12-05T17:03:13.629Z] Total : 82781.93 323.37 0.00 0.00 9233.00 3780.92 23492.14 00:16:39.835 00:16:39.835 real 0m2.530s 00:16:39.835 user 0m1.861s 00:16:39.835 sys 0m0.474s 00:16:39.835 17:03:13 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.835 ************************************ 00:16:39.835 END TEST bdev_write_zeroes 00:16:39.835 ************************************ 00:16:39.835 17:03:13 
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:16:39.835 17:03:14 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:39.835 17:03:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:39.835 17:03:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.835 17:03:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:39.835 ************************************ 00:16:39.835 START TEST bdev_json_nonenclosed 00:16:39.835 ************************************ 00:16:39.835 17:03:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:39.835 [2024-12-05 17:03:14.132747] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:16:39.835 [2024-12-05 17:03:14.132883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73156 ] 00:16:40.097 [2024-12-05 17:03:14.290216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.097 [2024-12-05 17:03:14.407759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.097 [2024-12-05 17:03:14.407855] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:16:40.097 [2024-12-05 17:03:14.407874] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:40.097 [2024-12-05 17:03:14.407884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:40.357 00:16:40.357 real 0m0.540s 00:16:40.357 user 0m0.328s 00:16:40.357 sys 0m0.107s 00:16:40.357 17:03:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.357 ************************************ 00:16:40.357 END TEST bdev_json_nonenclosed 00:16:40.357 ************************************ 00:16:40.357 17:03:14 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:16:40.357 17:03:14 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.357 17:03:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:16:40.357 17:03:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.357 17:03:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:40.357 ************************************ 00:16:40.357 START TEST bdev_json_nonarray 00:16:40.357 ************************************ 00:16:40.357 17:03:14 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:16:40.618 [2024-12-05 17:03:14.742396] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
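bdev_json_nonenclosed above, and bdev_json_nonarray whose run continues below, are negative tests: each feeds bdevperf a deliberately malformed JSON config and expects a clean error exit rather than a crash. The exact contents of nonenclosed.json and nonarray.json are not shown in this log; plausible minimal stand-ins that would trip the two json_config.c errors quoted here are:

  cat > nonenclosed.json <<'EOF'
  "subsystems": []
  EOF
  # -> Invalid JSON configuration: not enclosed in {}.

  cat > nonarray.json <<'EOF'
  { "subsystems": { "not": "an array" } }
  EOF
  # -> Invalid JSON configuration: 'subsystems' should be an array.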
00:16:40.618 [2024-12-05 17:03:14.742536] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73176 ] 00:16:40.618 [2024-12-05 17:03:14.904783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.878 [2024-12-05 17:03:15.028742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.878 [2024-12-05 17:03:15.028853] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:16:40.878 [2024-12-05 17:03:15.028874] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:16:40.878 [2024-12-05 17:03:15.028884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:40.878 ************************************ 00:16:40.878 END TEST bdev_json_nonarray 00:16:40.878 ************************************ 00:16:40.878 00:16:40.878 real 0m0.549s 00:16:40.878 user 0m0.334s 00:16:40.878 sys 0m0.108s 00:16:40.878 17:03:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.878 17:03:15 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:16:41.138 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:16:41.139 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:16:41.139 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:16:41.139 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:16:41.139 17:03:15 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:41.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.400 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.400 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.400 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.400 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:16:51.400 00:16:51.400 real 1m0.406s 00:16:51.400 user 1m20.899s 00:16:51.400 sys 0m46.653s 00:16:51.400 17:03:25 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.400 ************************************ 00:16:51.400 END TEST blockdev_xnvme 00:16:51.400 ************************************ 00:16:51.400 17:03:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:51.400 17:03:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.400 17:03:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.400 17:03:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.400 17:03:25 -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.400 ************************************ 00:16:51.400 START TEST ublk 00:16:51.400 ************************************ 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:16:51.400 * Looking for test storage... 00:16:51.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.400 17:03:25 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.400 17:03:25 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.400 17:03:25 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.400 17:03:25 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.400 17:03:25 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.400 17:03:25 ublk -- scripts/common.sh@344 -- # case "$op" in 00:16:51.400 17:03:25 ublk -- scripts/common.sh@345 -- # : 1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.400 17:03:25 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.400 17:03:25 ublk -- scripts/common.sh@365 -- # decimal 1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@353 -- # local d=1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.400 17:03:25 ublk -- scripts/common.sh@355 -- # echo 1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.400 17:03:25 ublk -- scripts/common.sh@366 -- # decimal 2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@353 -- # local d=2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.400 17:03:25 ublk -- scripts/common.sh@355 -- # echo 2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.400 17:03:25 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.400 17:03:25 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.400 17:03:25 ublk -- scripts/common.sh@368 -- # return 0 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.400 --rc genhtml_branch_coverage=1 00:16:51.400 --rc genhtml_function_coverage=1 00:16:51.400 --rc genhtml_legend=1 00:16:51.400 --rc geninfo_all_blocks=1 00:16:51.400 --rc geninfo_unexecuted_blocks=1 00:16:51.400 00:16:51.400 ' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.400 --rc genhtml_branch_coverage=1 00:16:51.400 --rc genhtml_function_coverage=1 00:16:51.400 --rc genhtml_legend=1 00:16:51.400 --rc geninfo_all_blocks=1 00:16:51.400 --rc geninfo_unexecuted_blocks=1 00:16:51.400 00:16:51.400 ' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.400 --rc genhtml_branch_coverage=1 00:16:51.400 --rc genhtml_function_coverage=1 00:16:51.400 --rc genhtml_legend=1 00:16:51.400 --rc geninfo_all_blocks=1 00:16:51.400 --rc geninfo_unexecuted_blocks=1 00:16:51.400 00:16:51.400 ' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.400 --rc genhtml_branch_coverage=1 00:16:51.400 --rc genhtml_function_coverage=1 00:16:51.400 --rc genhtml_legend=1 00:16:51.400 --rc geninfo_all_blocks=1 00:16:51.400 --rc geninfo_unexecuted_blocks=1 00:16:51.400 00:16:51.400 ' 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:51.400 17:03:25 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:51.400 17:03:25 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:51.400 17:03:25 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:51.400 17:03:25 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:51.400 17:03:25 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:51.400 17:03:25 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:51.400 17:03:25 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:51.400 17:03:25 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:16:51.400 17:03:25 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:16:51.400 17:03:25 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.400 17:03:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:51.400 ************************************ 00:16:51.400 START TEST test_save_ublk_config 00:16:51.400 ************************************ 00:16:51.400 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73478 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73478 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73478 ']' 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.401 17:03:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:51.401 [2024-12-05 17:03:25.720229] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
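test_save_ublk_config begins the way every ublk test here does: load the kernel module, start spdk_tgt with ublk debug logging, and block until the RPC socket answers. waitforlisten in autotest_common.sh does this with bounded retries; a condensed sketch of the same idea:

modprobe ublk_drv
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &
tgtpid=$!
# Poll until the UNIX-domain RPC socket exists (waitforlisten also probes it via rpc.py).
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done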
00:16:51.401 [2024-12-05 17:03:25.720388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73478 ] 00:16:51.662 [2024-12-05 17:03:25.886690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.662 [2024-12-05 17:03:26.007850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.650 [2024-12-05 17:03:26.731975] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:52.650 [2024-12-05 17:03:26.732873] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:52.650 malloc0 00:16:52.650 [2024-12-05 17:03:26.804114] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:52.650 [2024-12-05 17:03:26.804210] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:52.650 [2024-12-05 17:03:26.804221] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:52.650 [2024-12-05 17:03:26.804229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:52.650 [2024-12-05 17:03:26.813090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:52.650 [2024-12-05 17:03:26.813121] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:52.650 [2024-12-05 17:03:26.819987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:52.650 [2024-12-05 17:03:26.820103] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:52.650 [2024-12-05 17:03:26.836983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:52.650 0 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.650 17:03:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:52.936 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.936 17:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:16:52.936 "subsystems": [ 00:16:52.936 { 00:16:52.936 "subsystem": "fsdev", 00:16:52.936 "config": [ 00:16:52.936 { 00:16:52.936 "method": "fsdev_set_opts", 00:16:52.936 "params": { 00:16:52.936 "fsdev_io_pool_size": 65535, 00:16:52.936 "fsdev_io_cache_size": 256 00:16:52.936 } 00:16:52.936 } 00:16:52.936 ] 00:16:52.936 }, 00:16:52.936 { 00:16:52.936 "subsystem": "keyring", 00:16:52.936 "config": [] 00:16:52.936 }, 00:16:52.936 { 00:16:52.936 "subsystem": "iobuf", 00:16:52.936 "config": [ 00:16:52.936 { 
00:16:52.936 "method": "iobuf_set_options", 00:16:52.936 "params": { 00:16:52.936 "small_pool_count": 8192, 00:16:52.936 "large_pool_count": 1024, 00:16:52.936 "small_bufsize": 8192, 00:16:52.936 "large_bufsize": 135168, 00:16:52.936 "enable_numa": false 00:16:52.936 } 00:16:52.936 } 00:16:52.936 ] 00:16:52.936 }, 00:16:52.936 { 00:16:52.936 "subsystem": "sock", 00:16:52.936 "config": [ 00:16:52.936 { 00:16:52.936 "method": "sock_set_default_impl", 00:16:52.936 "params": { 00:16:52.936 "impl_name": "posix" 00:16:52.936 } 00:16:52.936 }, 00:16:52.936 { 00:16:52.937 "method": "sock_impl_set_options", 00:16:52.937 "params": { 00:16:52.937 "impl_name": "ssl", 00:16:52.937 "recv_buf_size": 4096, 00:16:52.937 "send_buf_size": 4096, 00:16:52.937 "enable_recv_pipe": true, 00:16:52.937 "enable_quickack": false, 00:16:52.937 "enable_placement_id": 0, 00:16:52.937 "enable_zerocopy_send_server": true, 00:16:52.937 "enable_zerocopy_send_client": false, 00:16:52.937 "zerocopy_threshold": 0, 00:16:52.937 "tls_version": 0, 00:16:52.937 "enable_ktls": false 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "sock_impl_set_options", 00:16:52.937 "params": { 00:16:52.937 "impl_name": "posix", 00:16:52.937 "recv_buf_size": 2097152, 00:16:52.937 "send_buf_size": 2097152, 00:16:52.937 "enable_recv_pipe": true, 00:16:52.937 "enable_quickack": false, 00:16:52.937 "enable_placement_id": 0, 00:16:52.937 "enable_zerocopy_send_server": true, 00:16:52.937 "enable_zerocopy_send_client": false, 00:16:52.937 "zerocopy_threshold": 0, 00:16:52.937 "tls_version": 0, 00:16:52.937 "enable_ktls": false 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "vmd", 00:16:52.937 "config": [] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "accel", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "accel_set_options", 00:16:52.937 "params": { 00:16:52.937 "small_cache_size": 128, 00:16:52.937 "large_cache_size": 16, 00:16:52.937 "task_count": 2048, 00:16:52.937 "sequence_count": 2048, 00:16:52.937 "buf_count": 2048 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "bdev", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "bdev_set_options", 00:16:52.937 "params": { 00:16:52.937 "bdev_io_pool_size": 65535, 00:16:52.937 "bdev_io_cache_size": 256, 00:16:52.937 "bdev_auto_examine": true, 00:16:52.937 "iobuf_small_cache_size": 128, 00:16:52.937 "iobuf_large_cache_size": 16 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_raid_set_options", 00:16:52.937 "params": { 00:16:52.937 "process_window_size_kb": 1024, 00:16:52.937 "process_max_bandwidth_mb_sec": 0 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_iscsi_set_options", 00:16:52.937 "params": { 00:16:52.937 "timeout_sec": 30 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_nvme_set_options", 00:16:52.937 "params": { 00:16:52.937 "action_on_timeout": "none", 00:16:52.937 "timeout_us": 0, 00:16:52.937 "timeout_admin_us": 0, 00:16:52.937 "keep_alive_timeout_ms": 10000, 00:16:52.937 "arbitration_burst": 0, 00:16:52.937 "low_priority_weight": 0, 00:16:52.937 "medium_priority_weight": 0, 00:16:52.937 "high_priority_weight": 0, 00:16:52.937 "nvme_adminq_poll_period_us": 10000, 00:16:52.937 "nvme_ioq_poll_period_us": 0, 00:16:52.937 "io_queue_requests": 0, 00:16:52.937 "delay_cmd_submit": true, 00:16:52.937 "transport_retry_count": 4, 00:16:52.937 
"bdev_retry_count": 3, 00:16:52.937 "transport_ack_timeout": 0, 00:16:52.937 "ctrlr_loss_timeout_sec": 0, 00:16:52.937 "reconnect_delay_sec": 0, 00:16:52.937 "fast_io_fail_timeout_sec": 0, 00:16:52.937 "disable_auto_failback": false, 00:16:52.937 "generate_uuids": false, 00:16:52.937 "transport_tos": 0, 00:16:52.937 "nvme_error_stat": false, 00:16:52.937 "rdma_srq_size": 0, 00:16:52.937 "io_path_stat": false, 00:16:52.937 "allow_accel_sequence": false, 00:16:52.937 "rdma_max_cq_size": 0, 00:16:52.937 "rdma_cm_event_timeout_ms": 0, 00:16:52.937 "dhchap_digests": [ 00:16:52.937 "sha256", 00:16:52.937 "sha384", 00:16:52.937 "sha512" 00:16:52.937 ], 00:16:52.937 "dhchap_dhgroups": [ 00:16:52.937 "null", 00:16:52.937 "ffdhe2048", 00:16:52.937 "ffdhe3072", 00:16:52.937 "ffdhe4096", 00:16:52.937 "ffdhe6144", 00:16:52.937 "ffdhe8192" 00:16:52.937 ] 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_nvme_set_hotplug", 00:16:52.937 "params": { 00:16:52.937 "period_us": 100000, 00:16:52.937 "enable": false 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_malloc_create", 00:16:52.937 "params": { 00:16:52.937 "name": "malloc0", 00:16:52.937 "num_blocks": 8192, 00:16:52.937 "block_size": 4096, 00:16:52.937 "physical_block_size": 4096, 00:16:52.937 "uuid": "619fc1fe-8291-4a96-8232-8f68e65b957e", 00:16:52.937 "optimal_io_boundary": 0, 00:16:52.937 "md_size": 0, 00:16:52.937 "dif_type": 0, 00:16:52.937 "dif_is_head_of_md": false, 00:16:52.937 "dif_pi_format": 0 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "bdev_wait_for_examine" 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "scsi", 00:16:52.937 "config": null 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "scheduler", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "framework_set_scheduler", 00:16:52.937 "params": { 00:16:52.937 "name": "static" 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "vhost_scsi", 00:16:52.937 "config": [] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "vhost_blk", 00:16:52.937 "config": [] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "ublk", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "ublk_create_target", 00:16:52.937 "params": { 00:16:52.937 "cpumask": "1" 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "ublk_start_disk", 00:16:52.937 "params": { 00:16:52.937 "bdev_name": "malloc0", 00:16:52.937 "ublk_id": 0, 00:16:52.937 "num_queues": 1, 00:16:52.937 "queue_depth": 128 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "nbd", 00:16:52.937 "config": [] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "nvmf", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "nvmf_set_config", 00:16:52.937 "params": { 00:16:52.937 "discovery_filter": "match_any", 00:16:52.937 "admin_cmd_passthru": { 00:16:52.937 "identify_ctrlr": false 00:16:52.937 }, 00:16:52.937 "dhchap_digests": [ 00:16:52.937 "sha256", 00:16:52.937 "sha384", 00:16:52.937 "sha512" 00:16:52.937 ], 00:16:52.937 "dhchap_dhgroups": [ 00:16:52.937 "null", 00:16:52.937 "ffdhe2048", 00:16:52.937 "ffdhe3072", 00:16:52.937 "ffdhe4096", 00:16:52.937 "ffdhe6144", 00:16:52.937 "ffdhe8192" 00:16:52.937 ] 00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "nvmf_set_max_subsystems", 00:16:52.937 "params": { 00:16:52.937 "max_subsystems": 1024 
00:16:52.937 } 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "method": "nvmf_set_crdt", 00:16:52.937 "params": { 00:16:52.937 "crdt1": 0, 00:16:52.937 "crdt2": 0, 00:16:52.937 "crdt3": 0 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }, 00:16:52.937 { 00:16:52.937 "subsystem": "iscsi", 00:16:52.937 "config": [ 00:16:52.937 { 00:16:52.937 "method": "iscsi_set_options", 00:16:52.937 "params": { 00:16:52.937 "node_base": "iqn.2016-06.io.spdk", 00:16:52.937 "max_sessions": 128, 00:16:52.937 "max_connections_per_session": 2, 00:16:52.937 "max_queue_depth": 64, 00:16:52.937 "default_time2wait": 2, 00:16:52.937 "default_time2retain": 20, 00:16:52.937 "first_burst_length": 8192, 00:16:52.937 "immediate_data": true, 00:16:52.937 "allow_duplicated_isid": false, 00:16:52.937 "error_recovery_level": 0, 00:16:52.937 "nop_timeout": 60, 00:16:52.937 "nop_in_interval": 30, 00:16:52.937 "disable_chap": false, 00:16:52.937 "require_chap": false, 00:16:52.937 "mutual_chap": false, 00:16:52.937 "chap_group": 0, 00:16:52.937 "max_large_datain_per_connection": 64, 00:16:52.937 "max_r2t_per_connection": 4, 00:16:52.937 "pdu_pool_size": 36864, 00:16:52.937 "immediate_data_pool_size": 16384, 00:16:52.937 "data_out_pool_size": 2048 00:16:52.937 } 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 } 00:16:52.937 ] 00:16:52.937 }' 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73478 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73478 ']' 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73478 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73478 00:16:52.937 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:52.938 killing process with pid 73478 00:16:52.938 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:52.938 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73478' 00:16:52.938 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73478 00:16:52.938 17:03:27 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73478 00:16:54.324 [2024-12-05 17:03:28.260136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:54.324 [2024-12-05 17:03:28.292095] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:54.324 [2024-12-05 17:03:28.292238] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:54.324 [2024-12-05 17:03:28.299184] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:54.324 [2024-12-05 17:03:28.299246] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:54.324 [2024-12-05 17:03:28.299260] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:54.324 [2024-12-05 17:03:28.299291] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:54.324 [2024-12-05 17:03:28.299446] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73538 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73538 00:16:55.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73538 ']' 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:55.709 17:03:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:55.709 "subsystems": [ 00:16:55.709 { 00:16:55.709 "subsystem": "fsdev", 00:16:55.709 "config": [ 00:16:55.709 { 00:16:55.709 "method": "fsdev_set_opts", 00:16:55.709 "params": { 00:16:55.709 "fsdev_io_pool_size": 65535, 00:16:55.709 "fsdev_io_cache_size": 256 00:16:55.709 } 00:16:55.709 } 00:16:55.709 ] 00:16:55.709 }, 00:16:55.709 { 00:16:55.709 "subsystem": "keyring", 00:16:55.709 "config": [] 00:16:55.709 }, 00:16:55.709 { 00:16:55.709 "subsystem": "iobuf", 00:16:55.709 "config": [ 00:16:55.709 { 00:16:55.709 "method": "iobuf_set_options", 00:16:55.709 "params": { 00:16:55.709 "small_pool_count": 8192, 00:16:55.709 "large_pool_count": 1024, 00:16:55.709 "small_bufsize": 8192, 00:16:55.709 "large_bufsize": 135168, 00:16:55.709 "enable_numa": false 00:16:55.709 } 00:16:55.709 } 00:16:55.709 ] 00:16:55.709 }, 00:16:55.709 { 00:16:55.709 "subsystem": "sock", 00:16:55.709 "config": [ 00:16:55.709 { 00:16:55.709 "method": "sock_set_default_impl", 00:16:55.709 "params": { 00:16:55.709 "impl_name": "posix" 00:16:55.709 } 00:16:55.709 }, 00:16:55.709 { 00:16:55.709 "method": "sock_impl_set_options", 00:16:55.709 "params": { 00:16:55.709 "impl_name": "ssl", 00:16:55.709 "recv_buf_size": 4096, 00:16:55.709 "send_buf_size": 4096, 00:16:55.709 "enable_recv_pipe": true, 00:16:55.709 "enable_quickack": false, 00:16:55.709 "enable_placement_id": 0, 00:16:55.709 "enable_zerocopy_send_server": true, 00:16:55.709 "enable_zerocopy_send_client": false, 00:16:55.709 "zerocopy_threshold": 0, 00:16:55.709 "tls_version": 0, 00:16:55.709 "enable_ktls": false 00:16:55.709 } 00:16:55.709 }, 00:16:55.709 { 00:16:55.709 "method": "sock_impl_set_options", 00:16:55.709 "params": { 00:16:55.709 "impl_name": "posix", 00:16:55.709 "recv_buf_size": 2097152, 00:16:55.709 "send_buf_size": 2097152, 00:16:55.709 "enable_recv_pipe": true, 00:16:55.709 "enable_quickack": false, 00:16:55.709 "enable_placement_id": 0, 00:16:55.709 "enable_zerocopy_send_server": true, 00:16:55.709 "enable_zerocopy_send_client": false, 00:16:55.709 "zerocopy_threshold": 0, 00:16:55.710 "tls_version": 0, 00:16:55.710 "enable_ktls": false 00:16:55.710 } 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "vmd", 00:16:55.710 "config": [] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "accel", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "accel_set_options", 00:16:55.710 "params": { 00:16:55.710 "small_cache_size": 128, 
00:16:55.710 "large_cache_size": 16, 00:16:55.710 "task_count": 2048, 00:16:55.710 "sequence_count": 2048, 00:16:55.710 "buf_count": 2048 00:16:55.710 } 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "bdev", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "bdev_set_options", 00:16:55.710 "params": { 00:16:55.710 "bdev_io_pool_size": 65535, 00:16:55.710 "bdev_io_cache_size": 256, 00:16:55.710 "bdev_auto_examine": true, 00:16:55.710 "iobuf_small_cache_size": 128, 00:16:55.710 "iobuf_large_cache_size": 16 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_raid_set_options", 00:16:55.710 "params": { 00:16:55.710 "process_window_size_kb": 1024, 00:16:55.710 "process_max_bandwidth_mb_sec": 0 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_iscsi_set_options", 00:16:55.710 "params": { 00:16:55.710 "timeout_sec": 30 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_nvme_set_options", 00:16:55.710 "params": { 00:16:55.710 "action_on_timeout": "none", 00:16:55.710 "timeout_us": 0, 00:16:55.710 "timeout_admin_us": 0, 00:16:55.710 "keep_alive_timeout_ms": 10000, 00:16:55.710 "arbitration_burst": 0, 00:16:55.710 "low_priority_weight": 0, 00:16:55.710 "medium_priority_weight": 0, 00:16:55.710 "high_priority_weight": 0, 00:16:55.710 "nvme_adminq_poll_period_us": 10000, 00:16:55.710 "nvme_ioq_poll_period_us": 0, 00:16:55.710 "io_queue_requests": 0, 00:16:55.710 "delay_cmd_submit": true, 00:16:55.710 "transport_retry_count": 4, 00:16:55.710 "bdev_retry_count": 3, 00:16:55.710 "transport_ack_timeout": 0, 00:16:55.710 "ctrlr_loss_timeout_sec": 0, 00:16:55.710 "reconnect_delay_sec": 0, 00:16:55.710 "fast_io_fail_timeout_sec": 0, 00:16:55.710 "disable_auto_failback": false, 00:16:55.710 "generate_uuids": false, 00:16:55.710 "transport_tos": 0, 00:16:55.710 "nvme_error_stat": false, 00:16:55.710 "rdma_srq_size": 0, 00:16:55.710 "io_path_stat": false, 00:16:55.710 "allow_accel_sequence": false, 00:16:55.710 "rdma_max_cq_size": 0, 00:16:55.710 "rdma_cm_event_timeout_ms": 0, 00:16:55.710 "dhchap_digests": [ 00:16:55.710 "sha256", 00:16:55.710 "sha384", 00:16:55.710 "sha512" 00:16:55.710 ], 00:16:55.710 "dhchap_dhgroups": [ 00:16:55.710 "null", 00:16:55.710 "ffdhe2048", 00:16:55.710 "ffdhe3072", 00:16:55.710 "ffdhe4096", 00:16:55.710 "ffdhe6144", 00:16:55.710 "ffdhe8192" 00:16:55.710 ] 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_nvme_set_hotplug", 00:16:55.710 "params": { 00:16:55.710 "period_us": 100000, 00:16:55.710 "enable": false 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_malloc_create", 00:16:55.710 "params": { 00:16:55.710 "name": "malloc0", 00:16:55.710 "num_blocks": 8192, 00:16:55.710 "block_size": 4096, 00:16:55.710 "physical_block_size": 4096, 00:16:55.710 "uuid": "619fc1fe-8291-4a96-8232-8f68e65b957e", 00:16:55.710 "optimal_io_boundary": 0, 00:16:55.710 "md_size": 0, 00:16:55.710 "dif_type": 0, 00:16:55.710 "dif_is_head_of_md": false, 00:16:55.710 "dif_pi_format": 0 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "bdev_wait_for_examine" 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "scsi", 00:16:55.710 "config": null 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "scheduler", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "framework_set_scheduler", 00:16:55.710 "params": { 00:16:55.710 "name": "static" 00:16:55.710 } 
00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "vhost_scsi", 00:16:55.710 "config": [] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "vhost_blk", 00:16:55.710 "config": [] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "ublk", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "ublk_create_target", 00:16:55.710 "params": { 00:16:55.710 "cpumask": "1" 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "ublk_start_disk", 00:16:55.710 "params": { 00:16:55.710 "bdev_name": "malloc0", 00:16:55.710 "ublk_id": 0, 00:16:55.710 "num_queues": 1, 00:16:55.710 "queue_depth": 128 00:16:55.710 } 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "nbd", 00:16:55.710 "config": [] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "nvmf", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "nvmf_set_config", 00:16:55.710 "params": { 00:16:55.710 "discovery_filter": "match_any", 00:16:55.710 "admin_cmd_passthru": { 00:16:55.710 "identify_ctrlr": false 00:16:55.710 }, 00:16:55.710 "dhchap_digests": [ 00:16:55.710 "sha256", 00:16:55.710 "sha384", 00:16:55.710 "sha512" 00:16:55.710 ], 00:16:55.710 "dhchap_dhgroups": [ 00:16:55.710 "null", 00:16:55.710 "ffdhe2048", 00:16:55.710 "ffdhe3072", 00:16:55.710 "ffdhe4096", 00:16:55.710 "ffdhe6144", 00:16:55.710 "ffdhe8192" 00:16:55.710 ] 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "nvmf_set_max_subsystems", 00:16:55.710 "params": { 00:16:55.710 "max_subsystems": 1024 00:16:55.710 } 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "method": "nvmf_set_crdt", 00:16:55.710 "params": { 00:16:55.710 "crdt1": 0, 00:16:55.710 "crdt2": 0, 00:16:55.710 "crdt3": 0 00:16:55.710 } 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }, 00:16:55.710 { 00:16:55.710 "subsystem": "iscsi", 00:16:55.710 "config": [ 00:16:55.710 { 00:16:55.710 "method": "iscsi_set_options", 00:16:55.710 "params": { 00:16:55.710 "node_base": "iqn.2016-06.io.spdk", 00:16:55.710 "max_sessions": 128, 00:16:55.710 "max_connections_per_session": 2, 00:16:55.710 "max_queue_depth": 64, 00:16:55.710 "default_time2wait": 2, 00:16:55.710 "default_time2retain": 20, 00:16:55.710 "first_burst_length": 8192, 00:16:55.710 "immediate_data": true, 00:16:55.710 "allow_duplicated_isid": false, 00:16:55.710 "error_recovery_level": 0, 00:16:55.710 "nop_timeout": 60, 00:16:55.710 "nop_in_interval": 30, 00:16:55.710 "disable_chap": false, 00:16:55.710 "require_chap": false, 00:16:55.710 "mutual_chap": false, 00:16:55.710 "chap_group": 0, 00:16:55.710 "max_large_datain_per_connection": 64, 00:16:55.710 "max_r2t_per_connection": 4, 00:16:55.710 "pdu_pool_size": 36864, 00:16:55.710 "immediate_data_pool_size": 16384, 00:16:55.710 "data_out_pool_size": 2048 00:16:55.710 } 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 } 00:16:55.710 ] 00:16:55.710 }' 00:16:55.710 [2024-12-05 17:03:29.978357] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:55.710 [2024-12-05 17:03:29.978473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73538 ] 00:16:55.970 [2024-12-05 17:03:30.127208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.970 [2024-12-05 17:03:30.201896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.542 [2024-12-05 17:03:30.841984] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:56.542 [2024-12-05 17:03:30.842616] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:56.542 [2024-12-05 17:03:30.850051] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:56.542 [2024-12-05 17:03:30.850105] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:56.542 [2024-12-05 17:03:30.850112] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:56.542 [2024-12-05 17:03:30.850118] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:56.542 [2024-12-05 17:03:30.859016] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:56.542 [2024-12-05 17:03:30.859031] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:56.542 [2024-12-05 17:03:30.865026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:56.542 [2024-12-05 17:03:30.865096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:56.542 [2024-12-05 17:03:30.882969] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73538 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73538 ']' 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73538 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73538 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:56.802 killing process with pid 73538 00:16:56.802 
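The round-trip just verified is the whole point of this test: snapshot the live configuration with save_config, kill the target, boot a fresh one from that snapshot, and confirm /dev/ublkb0 comes back. The -c /dev/fd/63 in the trace is what bash process substitution produces; a sketch of the same flow using rpc.py directly (rpc_cmd in the harness wraps it):

config=$(scripts/rpc.py save_config)          # the JSON dump shown above
kill "$tgtpid"; wait "$tgtpid"
build/bin/spdk_tgt -L ublk -c <(echo "$config") &
scripts/rpc.py ublk_get_disks                 # expect ublk0, backed by malloc0, to reappear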
17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73538' 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73538 00:16:56.802 17:03:30 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73538 00:16:57.744 [2024-12-05 17:03:31.970738] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:57.744 [2024-12-05 17:03:32.009976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:57.744 [2024-12-05 17:03:32.010076] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:57.744 [2024-12-05 17:03:32.020975] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:57.744 [2024-12-05 17:03:32.021015] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:57.744 [2024-12-05 17:03:32.021021] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:57.744 [2024-12-05 17:03:32.021040] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:57.744 [2024-12-05 17:03:32.021146] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:59.127 17:03:33 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:59.127 00:16:59.127 real 0m7.570s 00:16:59.127 user 0m5.065s 00:16:59.127 sys 0m3.098s 00:16:59.127 17:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.127 ************************************ 00:16:59.127 END TEST test_save_ublk_config 00:16:59.127 ************************************ 00:16:59.127 17:03:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:59.127 17:03:33 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73611 00:16:59.128 17:03:33 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:59.128 17:03:33 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73611 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@835 -- # '[' -z 73611 ']' 00:16:59.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.128 17:03:33 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.128 17:03:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.128 [2024-12-05 17:03:33.321399] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:16:59.128 [2024-12-05 17:03:33.321525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73611 ] 00:16:59.128 [2024-12-05 17:03:33.477260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:59.389 [2024-12-05 17:03:33.564878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.389 [2024-12-05 17:03:33.564900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.960 17:03:34 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.960 17:03:34 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:59.960 17:03:34 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:59.960 17:03:34 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:59.960 17:03:34 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.960 17:03:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 ************************************ 00:16:59.960 START TEST test_create_ublk 00:16:59.960 ************************************ 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:59.960 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:59.960 [2024-12-05 17:03:34.170969] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:59.960 [2024-12-05 17:03:34.172473] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.960 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:59.960 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.960 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.220 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.220 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:17:00.220 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:00.220 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.220 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.220 [2024-12-05 17:03:34.334067] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:17:00.220 [2024-12-05 17:03:34.334361] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:00.220 [2024-12-05 17:03:34.334375] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:00.221 [2024-12-05 17:03:34.334380] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:00.221 [2024-12-05 17:03:34.341984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:00.221 [2024-12-05 17:03:34.342002] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:00.221 
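test_create_ublk builds its device in three RPCs, all visible in the trace: create the ublk target, back it with a 128 MiB malloc bdev (4 KiB blocks, per MALLOC_BS above), and expose that bdev as ublk device 0 with 4 queues of depth 512. The same sequence via rpc.py, as a sketch:

scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create 128 4096            # size_mb block_size -> "Malloc0"
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # bdev, ublk id, num queues, queue depth
scripts/rpc.py ublk_get_disks -n 0                    # should report /dev/ublkb0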
[2024-12-05 17:03:34.349973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:00.221 [2024-12-05 17:03:34.350462] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:00.221 [2024-12-05 17:03:34.371983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:00.221 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:17:00.221 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.221 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:00.221 17:03:34 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:17:00.221 { 00:17:00.221 "ublk_device": "/dev/ublkb0", 00:17:00.221 "id": 0, 00:17:00.221 "queue_depth": 512, 00:17:00.221 "num_queues": 4, 00:17:00.221 "bdev_name": "Malloc0" 00:17:00.221 } 00:17:00.221 ]' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
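run_fio_test assembles the job string above piece by piece; the command it finally launches is reproduced below. It asserts sequential O_DIRECT writes across the full 128 MiB device for 10 seconds, stamping the 0xcc pattern; the verify read phase never runs because --time_based spends the whole runtime writing, exactly as fio warns next:

fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0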
00:17:00.221 17:03:34 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:17:00.482 fio: verification read phase will never start because write phase uses all of runtime 00:17:00.482 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:17:00.482 fio-3.35 00:17:00.482 Starting 1 process 00:17:10.474 00:17:10.474 fio_test: (groupid=0, jobs=1): err= 0: pid=73651: Thu Dec 5 17:03:44 2024 00:17:10.474 write: IOPS=15.2k, BW=59.3MiB/s (62.2MB/s)(593MiB/10001msec); 0 zone resets 00:17:10.474 clat (usec): min=34, max=5842, avg=65.13, stdev=118.08 00:17:10.474 lat (usec): min=35, max=5857, avg=65.54, stdev=118.09 00:17:10.474 clat percentiles (usec): 00:17:10.474 | 1.00th=[ 44], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 56], 00:17:10.474 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:17:10.474 | 70.00th=[ 64], 80.00th=[ 66], 90.00th=[ 69], 95.00th=[ 72], 00:17:10.474 | 99.00th=[ 82], 99.50th=[ 90], 99.90th=[ 2573], 99.95th=[ 3458], 00:17:10.474 | 99.99th=[ 4015] 00:17:10.474 bw ( KiB/s): min=35144, max=75848, per=100.00%, avg=60824.84, stdev=7458.05, samples=19 00:17:10.474 iops : min= 8786, max=18962, avg=15206.21, stdev=1864.51, samples=19 00:17:10.474 lat (usec) : 50=12.13%, 100=87.48%, 250=0.18%, 500=0.02%, 750=0.01% 00:17:10.474 lat (usec) : 1000=0.01% 00:17:10.474 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:17:10.474 cpu : usr=2.48%, sys=10.88%, ctx=151865, majf=0, minf=797 00:17:10.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:10.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:10.474 issued rwts: total=0,151865,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:10.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:10.474 00:17:10.474 Run status group 0 (all jobs): 00:17:10.474 WRITE: bw=59.3MiB/s (62.2MB/s), 59.3MiB/s-59.3MiB/s (62.2MB/s-62.2MB/s), io=593MiB (622MB), run=10001-10001msec 00:17:10.474 00:17:10.474 Disk stats (read/write): 00:17:10.474 ublkb0: ios=0/150293, merge=0/0, ticks=0/8654, in_queue=8654, util=99.10% 00:17:10.474 17:03:44 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:17:10.474 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.474 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.474 [2024-12-05 17:03:44.792277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:10.474 [2024-12-05 17:03:44.840013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:10.474 [2024-12-05 17:03:44.840704] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:10.733 [2024-12-05 17:03:44.849015] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:10.733 [2024-12-05 17:03:44.849268] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:10.733 [2024-12-05 17:03:44.849282] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.733 17:03:44 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.733 [2024-12-05 17:03:44.864042] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:17:10.733 request: 00:17:10.733 { 00:17:10.733 "ublk_id": 0, 00:17:10.733 "method": "ublk_stop_disk", 00:17:10.733 "req_id": 1 00:17:10.733 } 00:17:10.733 Got JSON-RPC error response 00:17:10.733 response: 00:17:10.733 { 00:17:10.733 "code": -19, 00:17:10.733 "message": "No such device" 00:17:10.733 } 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.733 17:03:44 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.733 [2024-12-05 17:03:44.880032] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:10.733 [2024-12-05 17:03:44.887969] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:10.733 [2024-12-05 17:03:44.888003] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.733 17:03:44 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.733 17:03:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.991 17:03:45 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:17:10.991 17:03:45 
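The second stop above is a negative test: once ublk0 is gone, ublk_stop_disk 0 must come back with the JSON-RPC error just shown (code -19, "No such device", i.e. ENODEV), and the NOT helper asserts the nonzero exit. Sketched directly:

scripts/rpc.py ublk_stop_disk 0   # first stop: succeeds
if scripts/rpc.py ublk_stop_disk 0; then
    echo "BUG: stopping a missing disk should fail" >&2
else
    echo "got expected 'No such device' (-19)"
fi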
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:17:10.991 17:03:45 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:10.991 00:17:10.991 real 0m11.175s 00:17:10.991 user 0m0.552s 00:17:10.991 sys 0m1.167s 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.991 17:03:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:10.991 ************************************ 00:17:10.991 END TEST test_create_ublk 00:17:10.991 ************************************ 00:17:11.248 17:03:45 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:17:11.248 17:03:45 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:11.248 17:03:45 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:11.248 17:03:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.248 ************************************ 00:17:11.248 START TEST test_create_multi_ublk 00:17:11.248 ************************************ 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.248 [2024-12-05 17:03:45.390960] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:11.248 [2024-12-05 17:03:45.392432] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:17:11.248 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.249 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.249 [2024-12-05 17:03:45.607075] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:17:11.249 [2024-12-05 17:03:45.607373] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:17:11.249 [2024-12-05 17:03:45.607384] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:17:11.249 [2024-12-05 17:03:45.607393] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.506 [2024-12-05 17:03:45.619030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.506 [2024-12-05 17:03:45.619052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.506 [2024-12-05 17:03:45.630974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.506 [2024-12-05 17:03:45.631473] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:17:11.506 [2024-12-05 17:03:45.656970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.506 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.763 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.763 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:17:11.763 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:17:11.763 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.763 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.763 [2024-12-05 17:03:45.881063] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:17:11.763 [2024-12-05 17:03:45.881357] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:17:11.763 [2024-12-05 17:03:45.881370] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:11.763 [2024-12-05 17:03:45.881376] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.763 [2024-12-05 17:03:45.890143] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.763 [2024-12-05 17:03:45.890159] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.763 [2024-12-05 17:03:45.896981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.764 [2024-12-05 17:03:45.897467] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:11.764 [2024-12-05 17:03:45.905974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.764 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.764 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:17:11.764 17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.764 
17:03:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:17:11.764 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.764 17:03:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:11.764 [2024-12-05 17:03:46.065059] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:17:11.764 [2024-12-05 17:03:46.065363] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:17:11.764 [2024-12-05 17:03:46.065375] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:17:11.764 [2024-12-05 17:03:46.065382] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:17:11.764 [2024-12-05 17:03:46.072977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:11.764 [2024-12-05 17:03:46.072995] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:11.764 [2024-12-05 17:03:46.080970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:11.764 [2024-12-05 17:03:46.081450] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:17:11.764 [2024-12-05 17:03:46.085661] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.764 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.021 [2024-12-05 17:03:46.245079] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:17:12.021 [2024-12-05 17:03:46.245368] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:17:12.021 [2024-12-05 17:03:46.245376] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:17:12.021 [2024-12-05 17:03:46.245381] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:17:12.021 
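All four devices in this test are created by the same pair of RPCs repeated with an incremented index; the sketch below reconstructs that loop from the rpc_cmd calls visible in this log (the rpc.py path is taken from the destroy step later on and the bound of 3 from seq 0 $MAX_DEV_ID; neither line is quoted from ublk.sh itself):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in $(seq 0 3); do
        "$rpc" bdev_malloc_create -b "Malloc$i" 128 4096     # 128 MiB malloc bdev, 4 KiB blocks
        "$rpc" ublk_start_disk "Malloc$i" "$i" -q 4 -d 512   # expose as /dev/ublkb$i: 4 queues, depth 512
    done

Each ublk_start_disk call drives the kernel handshake seen in the surrounding entries: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV.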
[2024-12-05 17:03:46.252990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:12.021 [2024-12-05 17:03:46.253004] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:12.021 [2024-12-05 17:03:46.260992] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:12.021 [2024-12-05 17:03:46.261474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:17:12.021 [2024-12-05 17:03:46.272991] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:17:12.021 { 00:17:12.021 "ublk_device": "/dev/ublkb0", 00:17:12.021 "id": 0, 00:17:12.021 "queue_depth": 512, 00:17:12.021 "num_queues": 4, 00:17:12.021 "bdev_name": "Malloc0" 00:17:12.021 }, 00:17:12.021 { 00:17:12.021 "ublk_device": "/dev/ublkb1", 00:17:12.021 "id": 1, 00:17:12.021 "queue_depth": 512, 00:17:12.021 "num_queues": 4, 00:17:12.021 "bdev_name": "Malloc1" 00:17:12.021 }, 00:17:12.021 { 00:17:12.021 "ublk_device": "/dev/ublkb2", 00:17:12.021 "id": 2, 00:17:12.021 "queue_depth": 512, 00:17:12.021 "num_queues": 4, 00:17:12.021 "bdev_name": "Malloc2" 00:17:12.021 }, 00:17:12.021 { 00:17:12.021 "ublk_device": "/dev/ublkb3", 00:17:12.021 "id": 3, 00:17:12.021 "queue_depth": 512, 00:17:12.021 "num_queues": 4, 00:17:12.021 "bdev_name": "Malloc3" 00:17:12.021 } 00:17:12.021 ]' 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.021 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:17:12.279 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:17:12.536 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 [2024-12-05 17:03:46.938040] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:17:12.794 [2024-12-05 17:03:46.978512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:12.794 [2024-12-05 17:03:46.979648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:17:12.794 [2024-12-05 17:03:46.985989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:12.794 [2024-12-05 17:03:46.986245] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:17:12.794 [2024-12-05 17:03:46.986259] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.794 17:03:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 [2024-12-05 17:03:47.000043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:12.794 [2024-12-05 17:03:47.038493] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:12.794 [2024-12-05 17:03:47.039589] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:12.794 [2024-12-05 17:03:47.044977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:12.794 [2024-12-05 17:03:47.045222] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:12.794 [2024-12-05 17:03:47.045235] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:12.794 [2024-12-05 17:03:47.060043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:17:12.794 [2024-12-05 17:03:47.095976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:12.794 [2024-12-05 17:03:47.096795] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:17:12.794 [2024-12-05 17:03:47.105003] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:12.794 [2024-12-05 17:03:47.105236] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:17:12.794 [2024-12-05 17:03:47.105249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.794 17:03:47 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.794 [2024-12-05 17:03:47.120039] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:17:12.794 [2024-12-05 17:03:47.152472] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:12.794 [2024-12-05 17:03:47.153397] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:17:12.794 [2024-12-05 17:03:47.159972] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:12.794 [2024-12-05 17:03:47.160199] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:17:12.794 [2024-12-05 17:03:47.160211] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:17:13.052 [2024-12-05 17:03:47.352019] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:13.052 [2024-12-05 17:03:47.359966] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:13.052 [2024-12-05 17:03:47.359990] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.052 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.617 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.617 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.617 17:03:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:13.617 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.617 17:03:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:13.876 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.876 17:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:13.876 17:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:13.876 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.876 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.135 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.135 17:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:17:14.135 17:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:14.135 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.135 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:17:14.701 17:03:48 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:17:14.701 00:17:14.701 real 0m3.494s 00:17:14.701 user 0m0.805s 00:17:14.701 sys 0m0.151s 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.701 17:03:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:17:14.701 ************************************ 00:17:14.701 END TEST test_create_multi_ublk 00:17:14.701 ************************************ 00:17:14.701 17:03:48 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:17:14.701 17:03:48 ublk -- ublk/ublk.sh@147 -- # cleanup 00:17:14.701 17:03:48 ublk -- ublk/ublk.sh@130 -- # killprocess 73611 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@954 -- # '[' -z 73611 ']' 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@958 -- # kill -0 73611 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@959 -- # uname 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73611 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73611' 00:17:14.701 killing process with pid 73611 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@973 -- # kill 73611 00:17:14.701 17:03:48 ublk -- common/autotest_common.sh@978 -- # wait 73611 00:17:15.268 [2024-12-05 17:03:49.479941] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:15.268 [2024-12-05 17:03:49.479991] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:15.837 00:17:15.837 real 0m24.678s 00:17:15.837 user 0m35.361s 00:17:15.837 sys 0m9.167s 00:17:15.837 17:03:50 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:15.837 17:03:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:17:15.837 ************************************ 00:17:15.837 END TEST ublk 00:17:15.837 ************************************ 00:17:15.837 17:03:50 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:15.837 
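The START TEST / END TEST banners that bracket each test above are printed by the run_test helper from autotest_common.sh. A simplified sketch of its shape, assuming only what the banners and the '[' 2 -le 1 ']' argument check in this log imply (the real helper also records per-test timing, which is where the real/user/sys lines come from):

    run_test() {
        [ "$#" -le 1 ] && return 1          # needs a test name plus a command to run
        local test_name="$1"; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        "$@"                                # run the test with xtrace enabled
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return "$rc"
    }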
17:03:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:15.837 17:03:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:15.837 17:03:50 -- common/autotest_common.sh@10 -- # set +x 00:17:15.837 ************************************ 00:17:15.837 START TEST ublk_recovery 00:17:15.837 ************************************ 00:17:15.837 17:03:50 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:17:16.097 * Looking for test storage... 00:17:16.098 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:16.098 17:03:50 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:16.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.098 --rc genhtml_branch_coverage=1 00:17:16.098 --rc genhtml_function_coverage=1 00:17:16.098 --rc genhtml_legend=1 00:17:16.098 --rc geninfo_all_blocks=1 00:17:16.098 --rc geninfo_unexecuted_blocks=1 00:17:16.098 00:17:16.098 ' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:16.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.098 --rc genhtml_branch_coverage=1 00:17:16.098 --rc genhtml_function_coverage=1 00:17:16.098 --rc genhtml_legend=1 00:17:16.098 --rc geninfo_all_blocks=1 00:17:16.098 --rc geninfo_unexecuted_blocks=1 00:17:16.098 00:17:16.098 ' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:16.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.098 --rc genhtml_branch_coverage=1 00:17:16.098 --rc genhtml_function_coverage=1 00:17:16.098 --rc genhtml_legend=1 00:17:16.098 --rc geninfo_all_blocks=1 00:17:16.098 --rc geninfo_unexecuted_blocks=1 00:17:16.098 00:17:16.098 ' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:16.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:16.098 --rc genhtml_branch_coverage=1 00:17:16.098 --rc genhtml_function_coverage=1 00:17:16.098 --rc genhtml_legend=1 00:17:16.098 --rc geninfo_all_blocks=1 00:17:16.098 --rc geninfo_unexecuted_blocks=1 00:17:16.098 00:17:16.098 ' 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:17:16.098 17:03:50 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:17:16.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=73996 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 73996 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 73996 ']' 00:17:16.098 17:03:50 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:16.098 17:03:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:16.098 [2024-12-05 17:03:50.389207] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:17:16.098 [2024-12-05 17:03:50.389326] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:17:16.359 [2024-12-05 17:03:50.549932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:16.359 [2024-12-05 17:03:50.665423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.359 [2024-12-05 17:03:50.665606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:17.304 17:03:51 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.304 [2024-12-05 17:03:51.350979] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:17.304 [2024-12-05 17:03:51.353306] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.304 17:03:51 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.304 malloc0 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.304 17:03:51 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:17.304 [2024-12-05 17:03:51.469159] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:17:17.304 [2024-12-05 17:03:51.469273] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:17:17.304 [2024-12-05 17:03:51.469287] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:17.304 [2024-12-05 17:03:51.469298] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:17:17.304 [2024-12-05 17:03:51.477013] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:17:17.304 [2024-12-05 17:03:51.477043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:17:17.304 [2024-12-05 17:03:51.485010] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:17:17.304 [2024-12-05 17:03:51.485185] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:17:17.304 [2024-12-05 17:03:51.509005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:17:17.304 1 00:17:17.304 17:03:51 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.304 17:03:51 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:17:18.246 17:03:52 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74031 00:17:18.246 17:03:52 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:17:18.246 17:03:52 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:17:18.505 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:18.505 fio-3.35 00:17:18.505 Starting 1 process 00:17:23.765 17:03:57 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 73996 00:17:23.765 17:03:57 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:17:29.071 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 73996 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:17:29.071 17:04:02 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74142 00:17:29.072 17:04:02 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:29.072 17:04:02 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74142 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74142 ']' 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.072 17:04:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.072 17:04:02 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:17:29.072 [2024-12-05 17:04:02.618083] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
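This is the crash point of the recovery test: fio keeps issuing I/O to /dev/ublkb1 while the target that served it (pid 73996) has just been killed with SIGKILL. The entries that follow restart the target and re-attach the same device without stopping fio. A sketch of that sequence, reconstructed from the commands visible in this log:

    # target killed hard mid-I/O: kill -9 73996
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &        # restart (becomes pid 74142)
    spdk_pid=$!
    # (the real script waits for /var/tmp/spdk.sock via waitforlisten before issuing RPCs)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" ublk_create_target
    "$rpc" bdev_malloc_create -b malloc0 64 4096     # same bdev name and geometry as before the crash
    "$rpc" ublk_recover_disk malloc0 1               # re-bind the surviving kernel device to the new target

Recovery works because the kernel side of ublk device 1 outlives the user-space target; the repeated UBLK_CMD_GET_DEV_INFO entries below poll its state until UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY complete.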
00:17:29.072 [2024-12-05 17:04:02.618524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74142 ] 00:17:29.072 [2024-12-05 17:04:02.775289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:29.072 [2024-12-05 17:04:02.857059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.072 [2024-12-05 17:04:02.857157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:17:29.398 17:04:03 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.398 [2024-12-05 17:04:03.445970] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:17:29.398 [2024-12-05 17:04:03.447471] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.398 17:04:03 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.398 malloc0 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.398 17:04:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:29.398 [2024-12-05 17:04:03.534067] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:17:29.398 [2024-12-05 17:04:03.534098] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:17:29.398 [2024-12-05 17:04:03.534106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:29.398 [2024-12-05 17:04:03.542006] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:29.398 [2024-12-05 17:04:03.542026] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:29.398 1 00:17:29.398 17:04:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.398 17:04:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74031 00:17:30.348 [2024-12-05 17:04:04.542059] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:30.348 [2024-12-05 17:04:04.548977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:30.349 [2024-12-05 17:04:04.548993] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:31.302 [2024-12-05 17:04:05.549022] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:31.302 [2024-12-05 17:04:05.552971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:31.302 [2024-12-05 17:04:05.552985] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:17:32.237 [2024-12-05 17:04:06.553015] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:17:32.237 [2024-12-05 17:04:06.560974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:17:32.237 [2024-12-05 17:04:06.560989] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:17:32.237 [2024-12-05 17:04:06.560996] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:17:32.237 [2024-12-05 17:04:06.561065] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:54.160 [2024-12-05 17:04:27.595981] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:54.160 [2024-12-05 17:04:27.601589] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:54.160 [2024-12-05 17:04:27.608164] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:54.160 [2024-12-05 17:04:27.608183] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:18:20.717 00:18:20.717 fio_test: (groupid=0, jobs=1): err= 0: pid=74040: Thu Dec 5 17:04:52 2024 00:18:20.717 read: IOPS=14.3k, BW=56.0MiB/s (58.7MB/s)(3361MiB/60002msec) 00:18:20.717 slat (nsec): min=921, max=295926, avg=5007.76, stdev=1617.25 00:18:20.717 clat (usec): min=824, max=30089k, avg=4316.44, stdev=253319.51 00:18:20.717 lat (usec): min=829, max=30089k, avg=4321.45, stdev=253319.51 00:18:20.717 clat percentiles (usec): 00:18:20.717 | 1.00th=[ 1762], 5.00th=[ 1876], 10.00th=[ 1909], 20.00th=[ 1942], 00:18:20.717 | 30.00th=[ 1958], 40.00th=[ 1975], 50.00th=[ 1991], 60.00th=[ 2008], 00:18:20.717 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2507], 95.00th=[ 3228], 00:18:20.717 | 99.00th=[ 5473], 99.50th=[ 5866], 99.90th=[ 8160], 99.95th=[12387], 00:18:20.717 | 99.99th=[13304] 00:18:20.717 bw ( KiB/s): min=18632, max=124304, per=100.00%, avg=112854.53, stdev=20227.90, samples=60 00:18:20.717 iops : min= 4658, max=31076, avg=28213.63, stdev=5056.98, samples=60 00:18:20.717 write: IOPS=14.3k, BW=55.9MiB/s (58.7MB/s)(3356MiB/60002msec); 0 zone resets 00:18:20.717 slat (nsec): min=1033, max=247848, avg=5042.73, stdev=1617.99 00:18:20.717 clat (usec): min=623, max=30089k, avg=4605.47, stdev=265669.00 00:18:20.717 lat (usec): min=628, max=30089k, avg=4610.51, stdev=265669.00 00:18:20.717 clat percentiles (usec): 00:18:20.717 | 1.00th=[ 1811], 5.00th=[ 1975], 10.00th=[ 2008], 20.00th=[ 2024], 00:18:20.717 | 30.00th=[ 2040], 40.00th=[ 2057], 50.00th=[ 2073], 60.00th=[ 2089], 00:18:20.717 | 70.00th=[ 2114], 80.00th=[ 2180], 90.00th=[ 2606], 95.00th=[ 3163], 00:18:20.717 | 99.00th=[ 5538], 99.50th=[ 5932], 99.90th=[ 8291], 99.95th=[12518], 00:18:20.717 | 99.99th=[13304] 00:18:20.717 bw ( KiB/s): min=18024, max=124984, per=100.00%, avg=112714.40, stdev=20429.85, samples=60 00:18:20.717 iops : min= 4506, max=31246, avg=28178.60, stdev=5107.46, samples=60 00:18:20.717 lat (usec) : 750=0.01%, 1000=0.01% 00:18:20.717 lat (msec) : 2=33.65%, 4=63.10%, 10=3.18%, 20=0.06%, >=2000=0.01% 00:18:20.717 cpu : usr=3.36%, sys=14.74%, ctx=57674, majf=0, minf=13 00:18:20.717 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:18:20.717 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.717 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:20.717 
issued rwts: total=860359,859175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.717 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:20.717 00:18:20.717 Run status group 0 (all jobs): 00:18:20.717 READ: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=3361MiB (3524MB), run=60002-60002msec 00:18:20.717 WRITE: bw=55.9MiB/s (58.7MB/s), 55.9MiB/s-55.9MiB/s (58.7MB/s-58.7MB/s), io=3356MiB (3519MB), run=60002-60002msec 00:18:20.717 00:18:20.717 Disk stats (read/write): 00:18:20.717 ublkb1: ios=857076/855900, merge=0/0, ticks=3655237/3828450, in_queue=7483688, util=99.91% 00:18:20.717 17:04:52 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:18:20.717 17:04:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.717 17:04:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.717 [2024-12-05 17:04:52.788746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:20.717 [2024-12-05 17:04:52.835993] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:20.717 [2024-12-05 17:04:52.836149] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:20.717 [2024-12-05 17:04:52.843976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:20.717 [2024-12-05 17:04:52.844065] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:20.717 [2024-12-05 17:04:52.844071] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:20.717 17:04:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.717 17:04:52 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:18:20.717 17:04:52 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.717 17:04:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.717 [2024-12-05 17:04:52.860039] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:20.718 [2024-12-05 17:04:52.867966] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:20.718 [2024-12-05 17:04:52.867994] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.718 17:04:52 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:18:20.718 17:04:52 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:18:20.718 17:04:52 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74142 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74142 ']' 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74142 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74142 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.718 killing process with pid 74142 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74142' 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74142 00:18:20.718 17:04:52 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74142 
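The fio summary above is the pass criterion: err=0 across roughly 860k reads and 859k writes spanning the crash, with ublkb1 at 99.91% utilization. The teardown that follows (kill 74142 after stopping the disk and destroying the target) goes through the killprocess helper whose trace is visible here; a hedged sketch of its core, simplified from that trace (the real helper also special-cases processes launched via sudo, hence the '[' reactor_0 = sudo ']' check):

    killprocess() {
        local pid="$1"
        [ -z "$pid" ] && return 1                # nothing to kill
        kill -0 "$pid" 2>/dev/null || return 0   # already gone
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                              # reap it and propagate its exit status
    }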
00:18:20.718 [2024-12-05 17:04:53.918237] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:20.718 [2024-12-05 17:04:53.918286] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:20.718 00:18:20.718 real 1m4.455s 00:18:20.718 user 1m47.133s 00:18:20.718 sys 0m21.919s 00:18:20.718 17:04:54 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.718 ************************************ 00:18:20.718 17:04:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 END TEST ublk_recovery 00:18:20.718 ************************************ 00:18:20.718 17:04:54 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:18:20.718 17:04:54 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:20.718 17:04:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.718 17:04:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 17:04:54 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:18:20.718 17:04:54 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.718 17:04:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:20.718 17:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.718 17:04:54 -- common/autotest_common.sh@10 -- # set +x 00:18:20.718 ************************************ 00:18:20.718 START TEST ftl 00:18:20.718 ************************************ 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.718 * Looking for test storage... 
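The test-storage probe and lcov setup now repeating for the ftl suite run the same version check seen before ublk_recovery: lt 1.15 2 asks whether the installed lcov (1.15) predates major version 2 by splitting both versions on dots and dashes and comparing component by component. A condensed sketch of that comparison, reconstructed from the scripts/common.sh trace in this log and reduced to the '<' case exercised here:

    lt() { cmp_versions "$1" '<' "$2"; }
    cmp_versions() {
        local IFS=.- v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"               # e.g. 1.15 -> (1 15)
        read -ra ver2 <<< "$3"               # e.g. 2    -> (2)
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly older: '<' holds
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # newer: '<' fails
        done
        return 1                             # equal versions: strictly-less fails
    }

Since 1 < 2 at the first component, the check succeeds and the branch/function coverage options for pre-2.0 lcov are exported, as the LCOV_OPTS lines that follow show.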
00:18:20.718 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:20.718 17:04:54 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:20.718 17:04:54 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:18:20.718 17:04:54 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:18:20.718 17:04:54 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:18:20.718 17:04:54 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:20.718 17:04:54 ftl -- scripts/common.sh@344 -- # case "$op" in 00:18:20.718 17:04:54 ftl -- scripts/common.sh@345 -- # : 1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:20.718 17:04:54 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:20.718 17:04:54 ftl -- scripts/common.sh@365 -- # decimal 1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@353 -- # local d=1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:20.718 17:04:54 ftl -- scripts/common.sh@355 -- # echo 1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:18:20.718 17:04:54 ftl -- scripts/common.sh@366 -- # decimal 2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@353 -- # local d=2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:20.718 17:04:54 ftl -- scripts/common.sh@355 -- # echo 2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:18:20.718 17:04:54 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:20.718 17:04:54 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:20.718 17:04:54 ftl -- scripts/common.sh@368 -- # return 0 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.718 --rc genhtml_branch_coverage=1 00:18:20.718 --rc genhtml_function_coverage=1 00:18:20.718 --rc genhtml_legend=1 00:18:20.718 --rc geninfo_all_blocks=1 00:18:20.718 --rc geninfo_unexecuted_blocks=1 00:18:20.718 00:18:20.718 ' 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.718 --rc genhtml_branch_coverage=1 00:18:20.718 --rc genhtml_function_coverage=1 00:18:20.718 --rc genhtml_legend=1 00:18:20.718 --rc geninfo_all_blocks=1 00:18:20.718 --rc geninfo_unexecuted_blocks=1 00:18:20.718 00:18:20.718 ' 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.718 --rc genhtml_branch_coverage=1 00:18:20.718 --rc genhtml_function_coverage=1 00:18:20.718 --rc 
genhtml_legend=1 00:18:20.718 --rc geninfo_all_blocks=1 00:18:20.718 --rc geninfo_unexecuted_blocks=1 00:18:20.718 00:18:20.718 ' 00:18:20.718 17:04:54 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:20.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:20.718 --rc genhtml_branch_coverage=1 00:18:20.718 --rc genhtml_function_coverage=1 00:18:20.718 --rc genhtml_legend=1 00:18:20.718 --rc geninfo_all_blocks=1 00:18:20.718 --rc geninfo_unexecuted_blocks=1 00:18:20.718 00:18:20.718 ' 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:20.718 17:04:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:18:20.718 17:04:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.718 17:04:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:20.718 17:04:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:20.718 17:04:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:20.718 17:04:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.718 17:04:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.718 17:04:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.718 17:04:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.718 17:04:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:20.718 17:04:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:20.718 17:04:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:20.718 17:04:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.718 17:04:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:20.718 17:04:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:20.718 17:04:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.718 17:04:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:20.718 17:04:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.718 17:04:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:20.718 17:04:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:20.718 17:04:54 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:20.718 17:04:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.718 17:04:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:18:20.718 17:04:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:20.977 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:21.236 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.236 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.236 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.236 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:18:21.236 17:04:55 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=74947 00:18:21.236 17:04:55 ftl -- ftl/ftl.sh@38 -- # waitforlisten 74947 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@835 -- # '[' -z 74947 ']' 00:18:21.236 17:04:55 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.236 17:04:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:21.236 [2024-12-05 17:04:55.461259] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:21.236 [2024-12-05 17:04:55.461380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74947 ] 00:18:21.494 [2024-12-05 17:04:55.620025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.494 [2024-12-05 17:04:55.716883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.065 17:04:56 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.065 17:04:56 ftl -- common/autotest_common.sh@868 -- # return 0 00:18:22.065 17:04:56 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:18:22.326 17:04:56 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:18:23.268 17:04:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:18:23.268 17:04:57 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:23.528 17:04:57 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:18:23.528 17:04:57 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:23.528 17:04:57 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:23.789 17:04:57 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@50 -- # break 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:18:23.790 17:04:57 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:18:23.790 17:04:57 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:18:24.051 17:04:58 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:18:24.051 17:04:58 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:18:24.051 17:04:58 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:18:24.051 17:04:58 ftl -- ftl/ftl.sh@63 -- # break 00:18:24.051 17:04:58 ftl -- ftl/ftl.sh@66 -- # killprocess 74947 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@954 -- # '[' -z 74947 ']' 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@958 -- # kill -0 74947 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@959 -- # uname 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74947 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.051 killing process with pid 74947 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74947' 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@973 -- # kill 74947 00:18:24.051 17:04:58 ftl -- common/autotest_common.sh@978 -- # wait 74947 00:18:25.437 17:04:59 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:18:25.437 17:04:59 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:25.437 17:04:59 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:25.437 17:04:59 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.437 17:04:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:25.437 ************************************ 00:18:25.437 START TEST ftl_fio_basic 00:18:25.437 ************************************ 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:18:25.437 * Looking for test storage... 
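[editor's note] An aside on the two jq selections traced just above: ftl.sh derives its disk roles entirely from bdev_get_bdevs output. The NV cache disk must expose 64-byte per-block metadata (md_size==64, which the FTL write buffer needs), be non-zoned, and hold at least 1310720 blocks; any other non-zoned disk of sufficient size can serve as base. A standalone sketch of the same selection, assuming the paths from this run (the --arg form is a variation on the hardcoded PCI address the script uses):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # NV cache candidates: non-zoned, >= 1310720 blocks, 64-byte per-block metadata
  nv_cache=$("$rpc_py" bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
  # base candidates: any other non-zoned disk of sufficient size
  base=$("$rpc_py" bdev_get_bdevs | jq -r --arg nv "$nv_cache" '.[] | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' | head -n1)
  echo "nv_cache=$nv_cache base=$base"    # in this run: 0000:00:10.0 and 0000:00:11.0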
00:18:25.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.437 --rc genhtml_branch_coverage=1 00:18:25.437 --rc genhtml_function_coverage=1 00:18:25.437 --rc genhtml_legend=1 00:18:25.437 --rc geninfo_all_blocks=1 00:18:25.437 --rc geninfo_unexecuted_blocks=1 00:18:25.437 00:18:25.437 ' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.437 --rc 
genhtml_branch_coverage=1 00:18:25.437 --rc genhtml_function_coverage=1 00:18:25.437 --rc genhtml_legend=1 00:18:25.437 --rc geninfo_all_blocks=1 00:18:25.437 --rc geninfo_unexecuted_blocks=1 00:18:25.437 00:18:25.437 ' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.437 --rc genhtml_branch_coverage=1 00:18:25.437 --rc genhtml_function_coverage=1 00:18:25.437 --rc genhtml_legend=1 00:18:25.437 --rc geninfo_all_blocks=1 00:18:25.437 --rc geninfo_unexecuted_blocks=1 00:18:25.437 00:18:25.437 ' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:25.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.437 --rc genhtml_branch_coverage=1 00:18:25.437 --rc genhtml_function_coverage=1 00:18:25.437 --rc genhtml_legend=1 00:18:25.437 --rc geninfo_all_blocks=1 00:18:25.437 --rc geninfo_unexecuted_blocks=1 00:18:25.437 00:18:25.437 ' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:25.437 
17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:18:25.437 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75079 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75079 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75079 ']' 00:18:25.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
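[editor's note] Worth spelling out what the waitforlisten call above does: fio.sh launches its own target with a three-core mask (-m 7) and then polls the default RPC socket until the target answers, with max_retries=100 as the trace shows. A minimal sketch of that startup/wait pattern, under the assumption of the default /var/tmp/spdk.sock socket (the real helper in autotest_common.sh also honors alternate rpc_addr values):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
  svcpid=$!
  # poll until the RPC server responds; rpc_get_methods is a cheap query with no side effects
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
  echo "target up, pid $svcpid"    # corresponds to svcpid=75079 in this run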
00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.438 17:04:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:25.438 [2024-12-05 17:04:59.761060] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:18:25.438 [2024-12-05 17:04:59.761182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75079 ] 00:18:25.699 [2024-12-05 17:04:59.918768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:25.699 [2024-12-05 17:04:59.997861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.699 [2024-12-05 17:04:59.998199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.699 [2024-12-05 17:04:59.998224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:18:26.270 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:26.531 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:26.532 17:05:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:26.793 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:26.793 { 00:18:26.793 "name": "nvme0n1", 00:18:26.793 "aliases": [ 00:18:26.793 "fc4e8a8d-6925-4022-ada2-da81e3bd7607" 00:18:26.793 ], 00:18:26.793 "product_name": "NVMe disk", 00:18:26.793 "block_size": 4096, 00:18:26.793 "num_blocks": 1310720, 00:18:26.793 "uuid": "fc4e8a8d-6925-4022-ada2-da81e3bd7607", 00:18:26.793 "numa_id": -1, 00:18:26.793 "assigned_rate_limits": { 00:18:26.793 "rw_ios_per_sec": 0, 00:18:26.793 "rw_mbytes_per_sec": 0, 00:18:26.793 "r_mbytes_per_sec": 0, 00:18:26.793 "w_mbytes_per_sec": 0 00:18:26.793 }, 00:18:26.793 "claimed": false, 00:18:26.793 "zoned": false, 00:18:26.793 "supported_io_types": { 00:18:26.793 "read": true, 00:18:26.793 "write": true, 00:18:26.793 "unmap": true, 00:18:26.793 "flush": true, 00:18:26.793 "reset": true, 00:18:26.793 "nvme_admin": true, 00:18:26.793 "nvme_io": true, 00:18:26.793 "nvme_io_md": 
false, 00:18:26.793 "write_zeroes": true, 00:18:26.793 "zcopy": false, 00:18:26.793 "get_zone_info": false, 00:18:26.793 "zone_management": false, 00:18:26.793 "zone_append": false, 00:18:26.794 "compare": true, 00:18:26.794 "compare_and_write": false, 00:18:26.794 "abort": true, 00:18:26.794 "seek_hole": false, 00:18:26.794 "seek_data": false, 00:18:26.794 "copy": true, 00:18:26.794 "nvme_iov_md": false 00:18:26.794 }, 00:18:26.794 "driver_specific": { 00:18:26.794 "nvme": [ 00:18:26.794 { 00:18:26.794 "pci_address": "0000:00:11.0", 00:18:26.794 "trid": { 00:18:26.794 "trtype": "PCIe", 00:18:26.794 "traddr": "0000:00:11.0" 00:18:26.794 }, 00:18:26.794 "ctrlr_data": { 00:18:26.794 "cntlid": 0, 00:18:26.794 "vendor_id": "0x1b36", 00:18:26.794 "model_number": "QEMU NVMe Ctrl", 00:18:26.794 "serial_number": "12341", 00:18:26.794 "firmware_revision": "8.0.0", 00:18:26.794 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:26.794 "oacs": { 00:18:26.794 "security": 0, 00:18:26.794 "format": 1, 00:18:26.794 "firmware": 0, 00:18:26.794 "ns_manage": 1 00:18:26.794 }, 00:18:26.794 "multi_ctrlr": false, 00:18:26.794 "ana_reporting": false 00:18:26.794 }, 00:18:26.794 "vs": { 00:18:26.794 "nvme_version": "1.4" 00:18:26.794 }, 00:18:26.794 "ns_data": { 00:18:26.794 "id": 1, 00:18:26.794 "can_share": false 00:18:26.794 } 00:18:26.794 } 00:18:26.794 ], 00:18:26.794 "mp_policy": "active_passive" 00:18:26.794 } 00:18:26.794 } 00:18:26.794 ]' 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:26.794 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:27.054 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:18:27.054 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:27.316 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=0422f55c-0785-4a1b-b23c-8810a5a64105 00:18:27.316 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0422f55c-0785-4a1b-b23c-8810a5a64105 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:27.578 { 00:18:27.578 "name": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:27.578 "aliases": [ 00:18:27.578 "lvs/nvme0n1p0" 00:18:27.578 ], 00:18:27.578 "product_name": "Logical Volume", 00:18:27.578 "block_size": 4096, 00:18:27.578 "num_blocks": 26476544, 00:18:27.578 "uuid": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:27.578 "assigned_rate_limits": { 00:18:27.578 "rw_ios_per_sec": 0, 00:18:27.578 "rw_mbytes_per_sec": 0, 00:18:27.578 "r_mbytes_per_sec": 0, 00:18:27.578 "w_mbytes_per_sec": 0 00:18:27.578 }, 00:18:27.578 "claimed": false, 00:18:27.578 "zoned": false, 00:18:27.578 "supported_io_types": { 00:18:27.578 "read": true, 00:18:27.578 "write": true, 00:18:27.578 "unmap": true, 00:18:27.578 "flush": false, 00:18:27.578 "reset": true, 00:18:27.578 "nvme_admin": false, 00:18:27.578 "nvme_io": false, 00:18:27.578 "nvme_io_md": false, 00:18:27.578 "write_zeroes": true, 00:18:27.578 "zcopy": false, 00:18:27.578 "get_zone_info": false, 00:18:27.578 "zone_management": false, 00:18:27.578 "zone_append": false, 00:18:27.578 "compare": false, 00:18:27.578 "compare_and_write": false, 00:18:27.578 "abort": false, 00:18:27.578 "seek_hole": true, 00:18:27.578 "seek_data": true, 00:18:27.578 "copy": false, 00:18:27.578 "nvme_iov_md": false 00:18:27.578 }, 00:18:27.578 "driver_specific": { 00:18:27.578 "lvol": { 00:18:27.578 "lvol_store_uuid": "0422f55c-0785-4a1b-b23c-8810a5a64105", 00:18:27.578 "base_bdev": "nvme0n1", 00:18:27.578 "thin_provision": true, 00:18:27.578 "num_allocated_clusters": 0, 00:18:27.578 "snapshot": false, 00:18:27.578 "clone": false, 00:18:27.578 "esnap_clone": false 00:18:27.578 } 00:18:27.578 } 00:18:27.578 } 00:18:27.578 ]' 00:18:27.578 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:18:27.839 17:05:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.098 { 00:18:28.098 "name": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:28.098 "aliases": [ 00:18:28.098 "lvs/nvme0n1p0" 00:18:28.098 ], 00:18:28.098 "product_name": "Logical Volume", 00:18:28.098 "block_size": 4096, 00:18:28.098 "num_blocks": 26476544, 00:18:28.098 "uuid": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:28.098 "assigned_rate_limits": { 00:18:28.098 "rw_ios_per_sec": 0, 00:18:28.098 "rw_mbytes_per_sec": 0, 00:18:28.098 "r_mbytes_per_sec": 0, 00:18:28.098 "w_mbytes_per_sec": 0 00:18:28.098 }, 00:18:28.098 "claimed": false, 00:18:28.098 "zoned": false, 00:18:28.098 "supported_io_types": { 00:18:28.098 "read": true, 00:18:28.098 "write": true, 00:18:28.098 "unmap": true, 00:18:28.098 "flush": false, 00:18:28.098 "reset": true, 00:18:28.098 "nvme_admin": false, 00:18:28.098 "nvme_io": false, 00:18:28.098 "nvme_io_md": false, 00:18:28.098 "write_zeroes": true, 00:18:28.098 "zcopy": false, 00:18:28.098 "get_zone_info": false, 00:18:28.098 "zone_management": false, 00:18:28.098 "zone_append": false, 00:18:28.098 "compare": false, 00:18:28.098 "compare_and_write": false, 00:18:28.098 "abort": false, 00:18:28.098 "seek_hole": true, 00:18:28.098 "seek_data": true, 00:18:28.098 "copy": false, 00:18:28.098 "nvme_iov_md": false 00:18:28.098 }, 00:18:28.098 "driver_specific": { 00:18:28.098 "lvol": { 00:18:28.098 "lvol_store_uuid": "0422f55c-0785-4a1b-b23c-8810a5a64105", 00:18:28.098 "base_bdev": "nvme0n1", 00:18:28.098 "thin_provision": true, 00:18:28.098 "num_allocated_clusters": 0, 00:18:28.098 "snapshot": false, 00:18:28.098 "clone": false, 00:18:28.098 "esnap_clone": false 00:18:28.098 } 00:18:28.098 } 00:18:28.098 } 00:18:28.098 ]' 00:18:28.098 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:18:28.357 
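[editor's note] The '[' test just traced is about to produce a shell error rather than a test result: whatever variable fio.sh line 52 compares is unset, so after expansion the test becomes [ -eq 1 ], leaving -eq with no left operand. The failure in isolation, plus the usual guard (the variable name here is illustrative, not the one fio.sh uses):

  unset flag                                # illustrative stand-in for the unset variable
  [ $flag -eq 1 ] && echo yes               # expands to '[ -eq 1 ]' -> "unary operator expected"
  [ "${flag:-0}" -eq 1 ] && echo yes        # quoted with a default: well-formed and quietly false

Since [ merely returns non-zero, the script keeps going and falls through to the l2p_dram_size_mb=60 default a few lines below.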
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:18:28.357 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 360c1af7-e56a-436f-8fc2-8919fb854233 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:18:28.616 { 00:18:28.616 "name": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:28.616 "aliases": [ 00:18:28.616 "lvs/nvme0n1p0" 00:18:28.616 ], 00:18:28.616 "product_name": "Logical Volume", 00:18:28.616 "block_size": 4096, 00:18:28.616 "num_blocks": 26476544, 00:18:28.616 "uuid": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:28.616 "assigned_rate_limits": { 00:18:28.616 "rw_ios_per_sec": 0, 00:18:28.616 "rw_mbytes_per_sec": 0, 00:18:28.616 "r_mbytes_per_sec": 0, 00:18:28.616 "w_mbytes_per_sec": 0 00:18:28.616 }, 00:18:28.616 "claimed": false, 00:18:28.616 "zoned": false, 00:18:28.616 "supported_io_types": { 00:18:28.616 "read": true, 00:18:28.616 "write": true, 00:18:28.616 "unmap": true, 00:18:28.616 "flush": false, 00:18:28.616 "reset": true, 00:18:28.616 "nvme_admin": false, 00:18:28.616 "nvme_io": false, 00:18:28.616 "nvme_io_md": false, 00:18:28.616 "write_zeroes": true, 00:18:28.616 "zcopy": false, 00:18:28.616 "get_zone_info": false, 00:18:28.616 "zone_management": false, 00:18:28.616 "zone_append": false, 00:18:28.616 "compare": false, 00:18:28.616 "compare_and_write": false, 00:18:28.616 "abort": false, 00:18:28.616 "seek_hole": true, 00:18:28.616 "seek_data": true, 00:18:28.616 "copy": false, 00:18:28.616 "nvme_iov_md": false 00:18:28.616 }, 00:18:28.616 "driver_specific": { 00:18:28.616 "lvol": { 00:18:28.616 "lvol_store_uuid": "0422f55c-0785-4a1b-b23c-8810a5a64105", 00:18:28.616 "base_bdev": "nvme0n1", 00:18:28.616 "thin_provision": true, 00:18:28.616 "num_allocated_clusters": 0, 00:18:28.616 "snapshot": false, 00:18:28.616 "clone": false, 00:18:28.616 "esnap_clone": false 00:18:28.616 } 00:18:28.616 } 00:18:28.616 } 00:18:28.616 ]' 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:18:28.616 17:05:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 360c1af7-e56a-436f-8fc2-8919fb854233 -c nvc0n1p0 --l2p_dram_limit 60 00:18:28.875 [2024-12-05 17:05:03.151682] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.151722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:28.875 [2024-12-05 17:05:03.151735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:28.875 [2024-12-05 17:05:03.151742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.151796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.151806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:28.875 [2024-12-05 17:05:03.151815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:28.875 [2024-12-05 17:05:03.151821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.151856] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:28.875 [2024-12-05 17:05:03.152500] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:28.875 [2024-12-05 17:05:03.152517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.152523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:28.875 [2024-12-05 17:05:03.152531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:18:28.875 [2024-12-05 17:05:03.152537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.152600] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 50fda8c6-a663-4d0e-a308-4d9486c95ef2 00:18:28.875 [2024-12-05 17:05:03.153652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.153683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:28.875 [2024-12-05 17:05:03.153691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:28.875 [2024-12-05 17:05:03.153698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.158943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.158976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:28.875 [2024-12-05 17:05:03.158984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.184 ms 00:18:28.875 [2024-12-05 17:05:03.158991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.159084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.159093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:28.875 [2024-12-05 17:05:03.159100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:28.875 [2024-12-05 17:05:03.159110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.159158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.159167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:28.875 [2024-12-05 17:05:03.159173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:28.875 [2024-12-05 17:05:03.159181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:28.875 [2024-12-05 17:05:03.159205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:28.875 [2024-12-05 17:05:03.162154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.162179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:28.875 [2024-12-05 17:05:03.162189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:18:28.875 [2024-12-05 17:05:03.162196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.162232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.162238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:28.875 [2024-12-05 17:05:03.162246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:28.875 [2024-12-05 17:05:03.162251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.875 [2024-12-05 17:05:03.162274] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:28.875 [2024-12-05 17:05:03.162391] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:28.875 [2024-12-05 17:05:03.162403] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:28.875 [2024-12-05 17:05:03.162411] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:28.875 [2024-12-05 17:05:03.162420] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:28.875 [2024-12-05 17:05:03.162427] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:28.875 [2024-12-05 17:05:03.162436] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:28.875 [2024-12-05 17:05:03.162442] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:28.875 [2024-12-05 17:05:03.162448] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:28.875 [2024-12-05 17:05:03.162454] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:28.875 [2024-12-05 17:05:03.162461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.875 [2024-12-05 17:05:03.162468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:28.875 [2024-12-05 17:05:03.162475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.187 ms 00:18:28.876 [2024-12-05 17:05:03.162480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.876 [2024-12-05 17:05:03.162549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.876 [2024-12-05 17:05:03.162555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:28.876 [2024-12-05 17:05:03.162562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:28.876 [2024-12-05 17:05:03.162567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.876 [2024-12-05 17:05:03.162663] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:28.876 [2024-12-05 17:05:03.162670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:28.876 
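[editor's note] Before the region dump continues below, the capacity lines just printed can be cross-checked by hand: 20971520 L2P entries at an address size of 4 bytes is exactly the 80.00 MiB the l2p region occupies in the dump that follows, and those 20971520 mappings of 4 KiB blocks are the 20971520-block ftl0 bdev reported once startup finishes. In shell:

  echo $(( 20971520 * 4 / 1024 / 1024 ))       # L2P table: 80 MiB, matching "Region l2p ... 80.00 MiB"
  echo $(( 20971520 * 4096 / 1024 / 1024 ))    # exposed capacity: 81920 MiB (80 GiB) of the 103424 MiB base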
[2024-12-05 17:05:03.162679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:28.876 [2024-12-05 17:05:03.162698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:28.876 [2024-12-05 17:05:03.162720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.876 [2024-12-05 17:05:03.162732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:28.876 [2024-12-05 17:05:03.162738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:28.876 [2024-12-05 17:05:03.162744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:28.876 [2024-12-05 17:05:03.162749] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:28.876 [2024-12-05 17:05:03.162756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:28.876 [2024-12-05 17:05:03.162762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:28.876 [2024-12-05 17:05:03.162774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:28.876 [2024-12-05 17:05:03.162792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:28.876 [2024-12-05 17:05:03.162809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:28.876 [2024-12-05 17:05:03.162826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:28.876 [2024-12-05 17:05:03.162842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:28.876 [2024-12-05 17:05:03.162861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:18:28.876 [2024-12-05 17:05:03.162884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:28.876 [2024-12-05 17:05:03.162889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:28.876 [2024-12-05 17:05:03.162896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:28.876 [2024-12-05 17:05:03.162901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:28.876 [2024-12-05 17:05:03.162907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:28.876 [2024-12-05 17:05:03.162912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:28.876 [2024-12-05 17:05:03.162925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:28.876 [2024-12-05 17:05:03.162931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162936] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:28.876 [2024-12-05 17:05:03.162943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:28.876 [2024-12-05 17:05:03.162964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:28.876 [2024-12-05 17:05:03.162973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:28.876 [2024-12-05 17:05:03.162979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:28.876 [2024-12-05 17:05:03.162988] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:28.876 [2024-12-05 17:05:03.162993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:28.876 [2024-12-05 17:05:03.163000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:28.876 [2024-12-05 17:05:03.163005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:28.876 [2024-12-05 17:05:03.163011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:28.876 [2024-12-05 17:05:03.163018] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:28.876 [2024-12-05 17:05:03.163026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:28.876 [2024-12-05 17:05:03.163040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:28.876 [2024-12-05 17:05:03.163046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:28.876 [2024-12-05 17:05:03.163059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:28.876 [2024-12-05 17:05:03.163065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:28.876 [2024-12-05 17:05:03.163073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:28.876 [2024-12-05 
17:05:03.163078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:28.876 [2024-12-05 17:05:03.163085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:28.876 [2024-12-05 17:05:03.163090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:28.876 [2024-12-05 17:05:03.163098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:28.876 [2024-12-05 17:05:03.163128] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:28.876 [2024-12-05 17:05:03.163135] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:28.876 [2024-12-05 17:05:03.163143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:28.877 [2024-12-05 17:05:03.163152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:28.877 [2024-12-05 17:05:03.163158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:28.877 [2024-12-05 17:05:03.163164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:28.877 [2024-12-05 17:05:03.163170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:28.877 [2024-12-05 17:05:03.163177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:28.877 [2024-12-05 17:05:03.163182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:18:28.877 [2024-12-05 17:05:03.163190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:28.877 [2024-12-05 17:05:03.163255] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
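[editor's note] This scrub notice is why fio.sh issued the create call through rpc.py -t 240 (the timeout=240 set earlier): on a fresh device FTL wipes the entire NV cache data region before bdev_ftl_create returns, which on real hardware can take far longer than the ~2.9 s logged below. For reference, the create/teardown pair exactly as this run invokes it:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc_py" -t 240 bdev_ftl_create -b ftl0 -d 360c1af7-e56a-436f-8fc2-8919fb854233 -c nvc0n1p0 --l2p_dram_limit 60
  # ... exercise ftl0 ...
  "$rpc_py" bdev_ftl_unload -b ftl0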
00:18:28.877 [2024-12-05 17:05:03.163265] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:32.161 [2024-12-05 17:05:06.044768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.044834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:32.161 [2024-12-05 17:05:06.044848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2881.500 ms 00:18:32.161 [2024-12-05 17:05:06.044859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.070487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.070662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:32.161 [2024-12-05 17:05:06.070680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.407 ms 00:18:32.161 [2024-12-05 17:05:06.070691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.070829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.070842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:32.161 [2024-12-05 17:05:06.070850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:32.161 [2024-12-05 17:05:06.070862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.116846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.116886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:32.161 [2024-12-05 17:05:06.116901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.935 ms 00:18:32.161 [2024-12-05 17:05:06.116912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.116969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.116981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:32.161 [2024-12-05 17:05:06.116990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:32.161 [2024-12-05 17:05:06.116998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.117373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.117393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:32.161 [2024-12-05 17:05:06.117402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:18:32.161 [2024-12-05 17:05:06.117413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.117537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.117546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:32.161 [2024-12-05 17:05:06.117555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:32.161 [2024-12-05 17:05:06.117566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.132105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.132254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:32.161 [2024-12-05 
17:05:06.132270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.515 ms 00:18:32.161 [2024-12-05 17:05:06.132280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.143606] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:32.161 [2024-12-05 17:05:06.158220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.158251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:32.161 [2024-12-05 17:05:06.158265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.852 ms 00:18:32.161 [2024-12-05 17:05:06.158273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.213375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.213517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:32.161 [2024-12-05 17:05:06.213540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.066 ms 00:18:32.161 [2024-12-05 17:05:06.213549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.213717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.213727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:32.161 [2024-12-05 17:05:06.213739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:18:32.161 [2024-12-05 17:05:06.213747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.237214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.237250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:32.161 [2024-12-05 17:05:06.237263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.413 ms 00:18:32.161 [2024-12-05 17:05:06.237271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.259585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.259614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:32.161 [2024-12-05 17:05:06.259627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.279 ms 00:18:32.161 [2024-12-05 17:05:06.259634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.260199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.260210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:32.161 [2024-12-05 17:05:06.260220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:18:32.161 [2024-12-05 17:05:06.260227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.330254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.330288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:32.161 [2024-12-05 17:05:06.330303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.970 ms 00:18:32.161 [2024-12-05 17:05:06.330314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 
17:05:06.354174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.354214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:32.161 [2024-12-05 17:05:06.354227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.765 ms 00:18:32.161 [2024-12-05 17:05:06.354234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.377245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.377275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:32.161 [2024-12-05 17:05:06.377287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.971 ms 00:18:32.161 [2024-12-05 17:05:06.377294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.400670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.400705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:32.161 [2024-12-05 17:05:06.400718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.332 ms 00:18:32.161 [2024-12-05 17:05:06.400724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.400772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.400781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:32.161 [2024-12-05 17:05:06.400795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:32.161 [2024-12-05 17:05:06.400802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.400888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.161 [2024-12-05 17:05:06.400898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:32.161 [2024-12-05 17:05:06.400907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:32.161 [2024-12-05 17:05:06.400915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.161 [2024-12-05 17:05:06.401902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3249.810 ms, result 0 00:18:32.161 { 00:18:32.161 "name": "ftl0", 00:18:32.161 "uuid": "50fda8c6-a663-4d0e-a308-4d9486c95ef2" 00:18:32.161 } 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.161 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:32.419 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:32.677 [ 00:18:32.677 { 00:18:32.677 "name": "ftl0", 00:18:32.677 "aliases": [ 00:18:32.677 "50fda8c6-a663-4d0e-a308-4d9486c95ef2" 00:18:32.677 ], 00:18:32.677 "product_name": "FTL 
disk", 00:18:32.677 "block_size": 4096, 00:18:32.677 "num_blocks": 20971520, 00:18:32.677 "uuid": "50fda8c6-a663-4d0e-a308-4d9486c95ef2", 00:18:32.677 "assigned_rate_limits": { 00:18:32.677 "rw_ios_per_sec": 0, 00:18:32.677 "rw_mbytes_per_sec": 0, 00:18:32.677 "r_mbytes_per_sec": 0, 00:18:32.677 "w_mbytes_per_sec": 0 00:18:32.677 }, 00:18:32.677 "claimed": false, 00:18:32.677 "zoned": false, 00:18:32.677 "supported_io_types": { 00:18:32.677 "read": true, 00:18:32.677 "write": true, 00:18:32.677 "unmap": true, 00:18:32.677 "flush": true, 00:18:32.677 "reset": false, 00:18:32.677 "nvme_admin": false, 00:18:32.677 "nvme_io": false, 00:18:32.677 "nvme_io_md": false, 00:18:32.677 "write_zeroes": true, 00:18:32.677 "zcopy": false, 00:18:32.677 "get_zone_info": false, 00:18:32.677 "zone_management": false, 00:18:32.677 "zone_append": false, 00:18:32.677 "compare": false, 00:18:32.677 "compare_and_write": false, 00:18:32.677 "abort": false, 00:18:32.677 "seek_hole": false, 00:18:32.677 "seek_data": false, 00:18:32.677 "copy": false, 00:18:32.677 "nvme_iov_md": false 00:18:32.677 }, 00:18:32.677 "driver_specific": { 00:18:32.677 "ftl": { 00:18:32.677 "base_bdev": "360c1af7-e56a-436f-8fc2-8919fb854233", 00:18:32.677 "cache": "nvc0n1p0" 00:18:32.677 } 00:18:32.677 } 00:18:32.677 } 00:18:32.677 ] 00:18:32.677 17:05:06 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:18:32.677 17:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:18:32.677 17:05:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:32.677 17:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:18:32.677 17:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:32.935 [2024-12-05 17:05:07.206938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.206994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:32.935 [2024-12-05 17:05:07.207006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:32.935 [2024-12-05 17:05:07.207016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.207053] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:32.935 [2024-12-05 17:05:07.209676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.209705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:32.935 [2024-12-05 17:05:07.209717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:18:32.935 [2024-12-05 17:05:07.209725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.210195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.210212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:32.935 [2024-12-05 17:05:07.210222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:18:32.935 [2024-12-05 17:05:07.210230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.213471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.213495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:32.935 
[2024-12-05 17:05:07.213507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:18:32.935 [2024-12-05 17:05:07.213516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.219710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.219733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:32.935 [2024-12-05 17:05:07.219745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.168 ms 00:18:32.935 [2024-12-05 17:05:07.219754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.240933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.240972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:32.935 [2024-12-05 17:05:07.240993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.094 ms 00:18:32.935 [2024-12-05 17:05:07.240999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.253542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.253573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:32.935 [2024-12-05 17:05:07.253587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.501 ms 00:18:32.935 [2024-12-05 17:05:07.253593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.253737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.253745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:32.935 [2024-12-05 17:05:07.253753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:32.935 [2024-12-05 17:05:07.253759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.271908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.271935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:32.935 [2024-12-05 17:05:07.271945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.119 ms 00:18:32.935 [2024-12-05 17:05:07.271967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:32.935 [2024-12-05 17:05:07.289542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:32.935 [2024-12-05 17:05:07.289652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:32.935 [2024-12-05 17:05:07.289668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.538 ms 00:18:32.935 [2024-12-05 17:05:07.289674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.195 [2024-12-05 17:05:07.306820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.195 [2024-12-05 17:05:07.306846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.195 [2024-12-05 17:05:07.306856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.110 ms 00:18:33.195 [2024-12-05 17:05:07.306861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.195 [2024-12-05 17:05:07.324375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.195 [2024-12-05 17:05:07.324474] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.195 [2024-12-05 17:05:07.324489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.430 ms 00:18:33.195 [2024-12-05 17:05:07.324494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.195 [2024-12-05 17:05:07.324527] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.195 [2024-12-05 17:05:07.324537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 
[2024-12-05 17:05:07.324689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:18:33.195 [2024-12-05 17:05:07.324858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.195 [2024-12-05 17:05:07.324863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.324998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.196 [2024-12-05 17:05:07.325231] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.196 [2024-12-05 17:05:07.325238] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 50fda8c6-a663-4d0e-a308-4d9486c95ef2 00:18:33.196 [2024-12-05 17:05:07.325244] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.196 [2024-12-05 17:05:07.325253] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.196 [2024-12-05 17:05:07.325258] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.196 [2024-12-05 17:05:07.325267] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.196 [2024-12-05 17:05:07.325272] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.196 [2024-12-05 17:05:07.325279] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.196 [2024-12-05 17:05:07.325284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.196 [2024-12-05 17:05:07.325291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.196 [2024-12-05 17:05:07.325295] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.196 [2024-12-05 17:05:07.325302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.196 [2024-12-05 17:05:07.325308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.196 [2024-12-05 17:05:07.325315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.777 ms 00:18:33.196 [2024-12-05 17:05:07.325321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.334966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.196 [2024-12-05 17:05:07.334991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.196 [2024-12-05 17:05:07.335000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.612 ms 00:18:33.196 [2024-12-05 17:05:07.335005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.335285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.196 [2024-12-05 17:05:07.335292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.196 [2024-12-05 17:05:07.335300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:33.196 [2024-12-05 17:05:07.335305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.370181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.370290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.196 [2024-12-05 17:05:07.370305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.370311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
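Aside: the 'FTL startup' trace earlier and the shutdown rollback that continues below are both driven from fio.sh over the SPDK RPC socket. A minimal bash sketch of that round trip, with the bdev UUID and cache name copied from this log and the base/cache provisioning assumed to have happened earlier in the run:

    # Sketch only: the RPC calls behind the 'FTL startup' and 'FTL shutdown'
    # management traces in this log. Assumes a running SPDK target on the
    # default RPC socket; names below are taken from the log output above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_ftl_create -b ftl0 -d 360c1af7-e56a-436f-8fc2-8919fb854233 -c nvc0n1p0  # -> 'FTL startup' trace
    $RPC bdev_wait_for_examine                    # block until bdev examine completes
    $RPC bdev_get_bdevs -b ftl0 -t 2000           # dumps the bdev JSON seen above
    $RPC bdev_ftl_unload -b ftl0                  # -> 'FTL shutdown' trace, result 0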
00:18:33.196 [2024-12-05 17:05:07.370362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.370368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.196 [2024-12-05 17:05:07.370376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.370381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.370462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.370472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.196 [2024-12-05 17:05:07.370479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.370485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.370512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.370518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.196 [2024-12-05 17:05:07.370525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.370531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.434478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.434514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.196 [2024-12-05 17:05:07.434525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.434532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.483758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.483790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.196 [2024-12-05 17:05:07.483801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.483807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.483873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.483881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.196 [2024-12-05 17:05:07.483891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.483897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.483983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.483991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.196 [2024-12-05 17:05:07.483999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.484005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.484095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.484103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.196 [2024-12-05 17:05:07.484111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 
17:05:07.484118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.484157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.484164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.196 [2024-12-05 17:05:07.484172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.484177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.484215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.484221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.196 [2024-12-05 17:05:07.484228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.484236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.484282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.196 [2024-12-05 17:05:07.484289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.196 [2024-12-05 17:05:07.484296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.196 [2024-12-05 17:05:07.484302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.196 [2024-12-05 17:05:07.484440] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 277.479 ms, result 0 00:18:33.196 true 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75079 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75079 ']' 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75079 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75079 00:18:33.196 killing process with pid 75079 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75079' 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75079 00:18:33.196 17:05:07 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75079 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.798 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:39.799 17:05:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:39.799 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:39.799 fio-3.35 00:18:39.799 Starting 1 thread 00:18:45.060 00:18:45.060 test: (groupid=0, jobs=1): err= 0: pid=75264: Thu Dec 5 17:05:18 2024 00:18:45.060 read: IOPS=817, BW=54.3MiB/s (56.9MB/s)(255MiB/4691msec) 00:18:45.060 slat (nsec): min=4135, max=23635, avg=5484.70, stdev=1880.18 00:18:45.060 clat (usec): min=298, max=2192, avg=557.21, stdev=160.59 00:18:45.060 lat (usec): min=303, max=2197, avg=562.69, stdev=160.71 00:18:45.060 clat percentiles (usec): 00:18:45.060 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 404], 20.00th=[ 445], 00:18:45.060 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 545], 60.00th=[ 553], 00:18:45.060 | 70.00th=[ 562], 80.00th=[ 627], 90.00th=[ 832], 95.00th=[ 889], 00:18:45.060 | 99.00th=[ 996], 99.50th=[ 1057], 99.90th=[ 1156], 99.95th=[ 1336], 00:18:45.060 | 99.99th=[ 2180] 00:18:45.060 write: IOPS=822, BW=54.6MiB/s (57.3MB/s)(256MiB/4686msec); 0 zone resets 00:18:45.060 slat (nsec): min=14592, max=60273, avg=19636.40, stdev=3347.10 00:18:45.060 clat (usec): min=315, max=1280, avg=626.93, stdev=159.01 00:18:45.060 lat (usec): min=334, max=1300, avg=646.57, stdev=158.72 00:18:45.060 clat percentiles (usec): 00:18:45.060 | 1.00th=[ 326], 5.00th=[ 400], 10.00th=[ 494], 20.00th=[ 510], 00:18:45.060 | 30.00th=[ 570], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 635], 00:18:45.060 | 70.00th=[ 644], 80.00th=[ 660], 90.00th=[ 914], 95.00th=[ 930], 00:18:45.060 | 99.00th=[ 1074], 99.50th=[ 1188], 99.90th=[ 1270], 99.95th=[ 1270], 00:18:45.060 | 99.99th=[ 1287] 00:18:45.060 bw ( KiB/s): min=43520, max=63104, per=99.06%, avg=55427.56, stdev=6087.09, samples=9 00:18:45.060 iops : min= 640, max= 928, avg=815.11, stdev=89.52, samples=9 00:18:45.060 lat (usec) : 500=27.74%, 750=55.61%, 1000=15.07% 
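Aside: the job line fio echoes for this run (rw=randwrite, bs=68.0KiB, ioengine=spdk_bdev, iodepth=1) implies a job file of roughly the following shape; this is a sketch consistent with those parameters, not the verbatim contents of randw-verify.fio, and the verify mode and bdev wiring are assumptions. The remainder of this run's statistics continue below.

    # Sketch: a job file matching what fio echoes for the randw-verify run.
    # spdk_json_conf points at the bdev config the test writes out earlier
    # (test/ftl/config/ftl.json); verify=crc32c and filename=ftl0 are assumed.
    cat > /tmp/randw-verify.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    thread=1
    rw=randwrite
    bs=68k
    iodepth=1
    verify=crc32c
    [test]
    filename=ftl0
    EOF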
00:18:45.060 lat (msec) : 2=1.56%, 4=0.01% 00:18:45.060 cpu : usr=99.32%, sys=0.04%, ctx=6, majf=0, minf=1169 00:18:45.060 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.060 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.060 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.060 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.060 00:18:45.060 Run status group 0 (all jobs): 00:18:45.060 READ: bw=54.3MiB/s (56.9MB/s), 54.3MiB/s-54.3MiB/s (56.9MB/s-56.9MB/s), io=255MiB (267MB), run=4691-4691msec 00:18:45.060 WRITE: bw=54.6MiB/s (57.3MB/s), 54.6MiB/s-54.6MiB/s (57.3MB/s-57.3MB/s), io=256MiB (269MB), run=4686-4686msec 00:18:46.444 ----------------------------------------------------- 00:18:46.445 Suppressions used: 00:18:46.445 count bytes template 00:18:46.445 1 5 /usr/src/fio/parse.c 00:18:46.445 1 8 libtcmalloc_minimal.so 00:18:46.445 1 904 libcrypto.so 00:18:46.445 ----------------------------------------------------- 00:18:46.445 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:46.445 17:05:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:46.445 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:46.445 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:46.445 fio-3.35 00:18:46.445 Starting 2 threads 00:19:13.011 00:19:13.011 first_half: (groupid=0, jobs=1): err= 0: pid=75367: Thu Dec 5 17:05:45 2024 00:19:13.011 read: IOPS=2766, BW=10.8MiB/s (11.3MB/s)(255MiB/23580msec) 00:19:13.011 slat (nsec): min=3092, max=22600, avg=4285.71, stdev=1014.63 00:19:13.011 clat (usec): min=573, max=433042, avg=33859.08, stdev=19093.72 00:19:13.011 lat (usec): min=576, max=433049, avg=33863.37, stdev=19093.82 00:19:13.011 clat percentiles (msec): 00:19:13.011 | 1.00th=[ 4], 5.00th=[ 30], 10.00th=[ 30], 20.00th=[ 31], 00:19:13.011 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:19:13.011 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 38], 95.00th=[ 42], 00:19:13.011 | 99.00th=[ 117], 99.50th=[ 150], 99.90th=[ 313], 99.95th=[ 380], 00:19:13.011 | 99.99th=[ 426] 00:19:13.011 write: IOPS=3524, BW=13.8MiB/s (14.4MB/s)(256MiB/18594msec); 0 zone resets 00:19:13.011 slat (usec): min=3, max=1148, avg= 6.13, stdev= 7.85 00:19:13.011 clat (usec): min=378, max=91293, avg=12332.31, stdev=20223.85 00:19:13.011 lat (usec): min=386, max=91300, avg=12338.44, stdev=20223.92 00:19:13.011 clat percentiles (usec): 00:19:13.011 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 881], 20.00th=[ 1090], 00:19:13.011 | 30.00th=[ 1287], 40.00th=[ 2311], 50.00th=[ 3720], 60.00th=[ 5014], 00:19:13.011 | 70.00th=[ 8094], 80.00th=[14615], 90.00th=[60031], 95.00th=[64750], 00:19:13.011 | 99.00th=[70779], 99.50th=[73925], 99.90th=[84411], 99.95th=[88605], 00:19:13.011 | 99.99th=[90702] 00:19:13.011 bw ( KiB/s): min= 31, max=51864, per=74.37%, avg=20969.24, stdev=14763.81, samples=25 00:19:13.011 iops : min= 7, max=12966, avg=5242.28, stdev=3691.00, samples=25 00:19:13.011 lat (usec) : 500=0.02%, 750=2.13%, 1000=5.63% 00:19:13.011 lat (msec) : 2=11.85%, 4=6.88%, 10=10.55%, 20=7.08%, 50=48.18% 00:19:13.011 lat (msec) : 100=6.99%, 250=0.60%, 500=0.08% 00:19:13.011 cpu : usr=99.21%, sys=0.13%, ctx=33, majf=0, minf=5529 00:19:13.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:13.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.011 issued rwts: total=65235,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.011 second_half: (groupid=0, jobs=1): err= 0: pid=75368: Thu Dec 5 17:05:45 2024 00:19:13.011 read: IOPS=2749, BW=10.7MiB/s (11.3MB/s)(254MiB/23694msec) 00:19:13.011 slat (nsec): min=3148, max=53594, avg=4999.19, stdev=932.17 00:19:13.011 clat (usec): min=675, max=334005, avg=33387.96, stdev=15396.33 00:19:13.011 lat (usec): min=680, max=334010, avg=33392.95, stdev=15396.38 00:19:13.011 clat percentiles (msec): 00:19:13.011 | 1.00th=[ 7], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 31], 00:19:13.011 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:19:13.011 | 70.00th=[ 32], 80.00th=[ 35], 90.00th=[ 37], 
95.00th=[ 41], 00:19:13.011 | 99.00th=[ 128], 99.50th=[ 142], 99.90th=[ 167], 99.95th=[ 205], 00:19:13.011 | 99.99th=[ 305] 00:19:13.011 write: IOPS=4377, BW=17.1MiB/s (17.9MB/s)(256MiB/14972msec); 0 zone resets 00:19:13.011 slat (usec): min=3, max=250, avg= 6.84, stdev= 2.94 00:19:13.011 clat (usec): min=399, max=91340, avg=13069.37, stdev=20662.12 00:19:13.011 lat (usec): min=405, max=91348, avg=13076.22, stdev=20662.20 00:19:13.011 clat percentiles (usec): 00:19:13.011 | 1.00th=[ 652], 5.00th=[ 750], 10.00th=[ 848], 20.00th=[ 1004], 00:19:13.011 | 30.00th=[ 1156], 40.00th=[ 1385], 50.00th=[ 2868], 60.00th=[ 5080], 00:19:13.011 | 70.00th=[11994], 80.00th=[16450], 90.00th=[60556], 95.00th=[65274], 00:19:13.011 | 99.00th=[71828], 99.50th=[73925], 99.90th=[84411], 99.95th=[88605], 00:19:13.011 | 99.99th=[90702] 00:19:13.011 bw ( KiB/s): min= 1352, max=47544, per=84.47%, avg=23819.68, stdev=12862.80, samples=22 00:19:13.011 iops : min= 338, max=11886, avg=5954.86, stdev=3215.66, samples=22 00:19:13.011 lat (usec) : 500=0.01%, 750=2.43%, 1000=7.57% 00:19:13.011 lat (msec) : 2=13.23%, 4=5.15%, 10=6.14%, 20=8.85%, 50=49.04% 00:19:13.011 lat (msec) : 100=6.83%, 250=0.75%, 500=0.02% 00:19:13.011 cpu : usr=99.38%, sys=0.09%, ctx=50, majf=0, minf=5570 00:19:13.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:19:13.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:13.011 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:13.011 issued rwts: total=65147,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:13.011 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:13.011 00:19:13.011 Run status group 0 (all jobs): 00:19:13.011 READ: bw=21.5MiB/s (22.5MB/s), 10.7MiB/s-10.8MiB/s (11.3MB/s-11.3MB/s), io=509MiB (534MB), run=23580-23694msec 00:19:13.011 WRITE: bw=27.5MiB/s (28.9MB/s), 13.8MiB/s-17.1MiB/s (14.4MB/s-17.9MB/s), io=512MiB (537MB), run=14972-18594msec 00:19:13.011 ----------------------------------------------------- 00:19:13.011 Suppressions used: 00:19:13.011 count bytes template 00:19:13.011 2 10 /usr/src/fio/parse.c 00:19:13.011 1 96 /usr/src/fio/iolog.c 00:19:13.011 1 8 libtcmalloc_minimal.so 00:19:13.011 1 904 libcrypto.so 00:19:13.011 ----------------------------------------------------- 00:19:13.011 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 
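Aside: the xtrace that follows resolves the ASAN runtime the fio plugin links against and preloads it ahead of fio. Stripped of tracing, the fio_plugin helper reduces to roughly this sketch (paths taken from the log; the $fio_job variable stands in for the .fio file passed to the helper):

    # Sketch of the fio_plugin helper traced below: find the libasan that the
    # spdk_bdev plugin was linked against and preload it together with the
    # plugin, so sanitizer interceptors are in place before fio dlopen()s it.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    if [ -n "$asan_lib" ]; then
      LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$fio_job"
    else
      LD_PRELOAD="$plugin" /usr/src/fio/fio "$fio_job"
    fi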
00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:19:13.011 17:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:13.011 17:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.011 17:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.011 17:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:19:13.011 17:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.011 17:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:19:13.011 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:19:13.011 fio-3.35 00:19:13.011 Starting 1 thread 00:19:31.130 00:19:31.130 test: (groupid=0, jobs=1): err= 0: pid=75677: Thu Dec 5 17:06:02 2024 00:19:31.130 read: IOPS=7915, BW=30.9MiB/s (32.4MB/s)(255MiB/8237msec) 00:19:31.130 slat (nsec): min=3035, max=32809, avg=3557.62, stdev=691.36 00:19:31.130 clat (usec): min=488, max=31206, avg=16162.42, stdev=2253.50 00:19:31.130 lat (usec): min=494, max=31209, avg=16165.97, stdev=2253.52 00:19:31.130 clat percentiles (usec): 00:19:31.130 | 1.00th=[14091], 5.00th=[14484], 10.00th=[14615], 20.00th=[14877], 00:19:31.130 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:19:31.130 | 70.00th=[15926], 80.00th=[16712], 90.00th=[19530], 95.00th=[21365], 00:19:31.130 | 99.00th=[23987], 99.50th=[25560], 99.90th=[28967], 99.95th=[29492], 00:19:31.130 | 99.99th=[30016] 00:19:31.130 write: IOPS=10.7k, BW=41.7MiB/s (43.7MB/s)(256MiB/6144msec); 0 zone resets 00:19:31.130 slat (usec): min=4, max=485, avg= 7.28, stdev= 5.33 00:19:31.130 clat (usec): min=522, max=63599, avg=11948.21, stdev=13651.72 00:19:31.130 lat (usec): min=528, max=63606, avg=11955.49, stdev=13651.81 00:19:31.130 clat percentiles (usec): 00:19:31.130 | 1.00th=[ 799], 5.00th=[ 1037], 10.00th=[ 1237], 20.00th=[ 1565], 00:19:31.130 | 30.00th=[ 1893], 40.00th=[ 2802], 50.00th=[ 8291], 60.00th=[10552], 00:19:31.130 | 70.00th=[13173], 80.00th=[15795], 90.00th=[36439], 95.00th=[43254], 00:19:31.130 | 99.00th=[54789], 99.50th=[57410], 99.90th=[60556], 99.95th=[61080], 00:19:31.130 | 99.99th=[63177] 00:19:31.130 bw ( KiB/s): min=11808, max=55128, per=94.52%, avg=40329.85, stdev=11013.04, samples=13 00:19:31.130 iops : min= 2952, max=13782, avg=10082.46, stdev=2753.26, samples=13 00:19:31.130 lat (usec) : 500=0.01%, 750=0.32%, 1000=1.73% 00:19:31.130 lat (msec) : 2=14.10%, 4=4.74%, 10=8.09%, 20=58.55%, 50=11.33% 00:19:31.130 lat (msec) : 100=1.14% 00:19:31.130 cpu : usr=99.07%, 
sys=0.19%, ctx=21, majf=0, minf=5565 00:19:31.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:31.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.130 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:31.130 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:31.130 00:19:31.130 Run status group 0 (all jobs): 00:19:31.130 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=255MiB (267MB), run=8237-8237msec 00:19:31.130 WRITE: bw=41.7MiB/s (43.7MB/s), 41.7MiB/s-41.7MiB/s (43.7MB/s-43.7MB/s), io=256MiB (268MB), run=6144-6144msec 00:19:31.130 ----------------------------------------------------- 00:19:31.130 Suppressions used: 00:19:31.130 count bytes template 00:19:31.130 1 5 /usr/src/fio/parse.c 00:19:31.130 2 192 /usr/src/fio/iolog.c 00:19:31.130 1 8 libtcmalloc_minimal.so 00:19:31.130 1 904 libcrypto.so 00:19:31.130 ----------------------------------------------------- 00:19:31.130 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:31.130 Remove shared memory files 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57090 /dev/shm/spdk_tgt_trace.pid73996 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:31.130 ************************************ 00:19:31.130 END TEST ftl_fio_basic 00:19:31.130 ************************************ 00:19:31.130 00:19:31.130 real 1m4.581s 00:19:31.130 user 2m12.749s 00:19:31.130 sys 0m11.779s 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:31.130 17:06:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:31.130 17:06:04 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:31.130 17:06:04 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:31.130 17:06:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:31.130 17:06:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:31.130 ************************************ 00:19:31.130 START TEST ftl_bdevperf 00:19:31.130 ************************************ 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:31.130 * Looking for test storage... 
00:19:31.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:31.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.130 --rc genhtml_branch_coverage=1 00:19:31.130 --rc genhtml_function_coverage=1 00:19:31.130 --rc genhtml_legend=1 00:19:31.130 --rc geninfo_all_blocks=1 00:19:31.130 --rc geninfo_unexecuted_blocks=1 00:19:31.130 00:19:31.130 ' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:31.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.130 --rc genhtml_branch_coverage=1 00:19:31.130 
--rc genhtml_function_coverage=1 00:19:31.130 --rc genhtml_legend=1 00:19:31.130 --rc geninfo_all_blocks=1 00:19:31.130 --rc geninfo_unexecuted_blocks=1 00:19:31.130 00:19:31.130 ' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:31.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.130 --rc genhtml_branch_coverage=1 00:19:31.130 --rc genhtml_function_coverage=1 00:19:31.130 --rc genhtml_legend=1 00:19:31.130 --rc geninfo_all_blocks=1 00:19:31.130 --rc geninfo_unexecuted_blocks=1 00:19:31.130 00:19:31.130 ' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:31.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.130 --rc genhtml_branch_coverage=1 00:19:31.130 --rc genhtml_function_coverage=1 00:19:31.130 --rc genhtml_legend=1 00:19:31.130 --rc geninfo_all_blocks=1 00:19:31.130 --rc geninfo_unexecuted_blocks=1 00:19:31.130 00:19:31.130 ' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:31.130 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=75925 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 75925 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 75925 ']' 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.131 17:06:04 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:31.131 [2024-12-05 17:06:04.423564] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
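Annotation on the lcov probe at the top of this section: the lt 1.15 2 call is the generic version comparator from scripts/common.sh, which splits both version strings on '.' and '-' and compares them field by field, exactly as the xtrace walks through above. A simplified sketch of that logic (illustrative only; the real helper also supports gt/ge/le and validates each field as a decimal):

    # lt A B -> succeeds (returns 0) when version A sorts strictly before B
    lt() {
        local IFS=.-                 # split fields on '.' and '-'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # e.g. 1 < 2 for 'lt 1.15 2'
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                     # equal is not "less than"
    }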
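The start-up handshake in the trace just above follows the standard SPDK test-harness pattern: bdevperf is launched with -z so it initializes and then blocks, and waitforlisten polls the RPC socket until the target answers before the script issues any bdev RPCs. A minimal sketch under the assumptions visible in this trace (the probe RPC and the sleep interval are illustrative, not the exact autotest_common.sh implementation):

    # Start bdevperf in wait mode (-z): it builds the app framework, then
    # blocks until a job is submitted over RPC.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!

    # Poll the UNIX domain socket until the RPC server answers (max_retries=100
    # in the trace above); only then are bdev/FTL RPCs safe to send.
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for (( i = 0; i < 100; i++ )); do
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done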
00:19:31.131 [2024-12-05 17:06:04.423837] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75925 ] 00:19:31.131 [2024-12-05 17:06:04.587458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.131 [2024-12-05 17:06:04.707439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:31.131 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:31.390 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:31.650 { 00:19:31.650 "name": "nvme0n1", 00:19:31.650 "aliases": [ 00:19:31.650 "05cd8533-f03e-4a88-a321-be1476ad1ca3" 00:19:31.650 ], 00:19:31.650 "product_name": "NVMe disk", 00:19:31.650 "block_size": 4096, 00:19:31.650 "num_blocks": 1310720, 00:19:31.650 "uuid": "05cd8533-f03e-4a88-a321-be1476ad1ca3", 00:19:31.650 "numa_id": -1, 00:19:31.650 "assigned_rate_limits": { 00:19:31.650 "rw_ios_per_sec": 0, 00:19:31.650 "rw_mbytes_per_sec": 0, 00:19:31.650 "r_mbytes_per_sec": 0, 00:19:31.650 "w_mbytes_per_sec": 0 00:19:31.650 }, 00:19:31.650 "claimed": true, 00:19:31.650 "claim_type": "read_many_write_one", 00:19:31.650 "zoned": false, 00:19:31.650 "supported_io_types": { 00:19:31.650 "read": true, 00:19:31.650 "write": true, 00:19:31.650 "unmap": true, 00:19:31.650 "flush": true, 00:19:31.650 "reset": true, 00:19:31.650 "nvme_admin": true, 00:19:31.650 "nvme_io": true, 00:19:31.650 "nvme_io_md": false, 00:19:31.650 "write_zeroes": true, 00:19:31.650 "zcopy": false, 00:19:31.650 "get_zone_info": false, 00:19:31.650 "zone_management": false, 00:19:31.650 "zone_append": false, 00:19:31.650 "compare": true, 00:19:31.650 "compare_and_write": false, 00:19:31.650 "abort": true, 00:19:31.650 "seek_hole": false, 00:19:31.650 "seek_data": false, 00:19:31.650 "copy": true, 00:19:31.650 "nvme_iov_md": false 00:19:31.650 }, 00:19:31.650 "driver_specific": { 00:19:31.650 
"nvme": [ 00:19:31.650 { 00:19:31.650 "pci_address": "0000:00:11.0", 00:19:31.650 "trid": { 00:19:31.650 "trtype": "PCIe", 00:19:31.650 "traddr": "0000:00:11.0" 00:19:31.650 }, 00:19:31.650 "ctrlr_data": { 00:19:31.650 "cntlid": 0, 00:19:31.650 "vendor_id": "0x1b36", 00:19:31.650 "model_number": "QEMU NVMe Ctrl", 00:19:31.650 "serial_number": "12341", 00:19:31.650 "firmware_revision": "8.0.0", 00:19:31.650 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:31.650 "oacs": { 00:19:31.650 "security": 0, 00:19:31.650 "format": 1, 00:19:31.650 "firmware": 0, 00:19:31.650 "ns_manage": 1 00:19:31.650 }, 00:19:31.650 "multi_ctrlr": false, 00:19:31.650 "ana_reporting": false 00:19:31.650 }, 00:19:31.650 "vs": { 00:19:31.650 "nvme_version": "1.4" 00:19:31.650 }, 00:19:31.650 "ns_data": { 00:19:31.650 "id": 1, 00:19:31.650 "can_share": false 00:19:31.650 } 00:19:31.650 } 00:19:31.650 ], 00:19:31.650 "mp_policy": "active_passive" 00:19:31.650 } 00:19:31.650 } 00:19:31.650 ]' 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:31.650 17:06:05 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:31.910 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=0422f55c-0785-4a1b-b23c-8810a5a64105 00:19:31.910 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:31.910 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0422f55c-0785-4a1b-b23c-8810a5a64105 00:19:32.170 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:32.430 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276 00:19:32.430 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:06 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:32.690 17:06:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe279f2a-70d1-484f-914e-89dee640c849 00:19:32.690 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:32.690 { 00:19:32.690 "name": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:32.690 "aliases": [ 00:19:32.690 "lvs/nvme0n1p0" 00:19:32.690 ], 00:19:32.690 "product_name": "Logical Volume", 00:19:32.690 "block_size": 4096, 00:19:32.690 "num_blocks": 26476544, 00:19:32.690 "uuid": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:32.690 "assigned_rate_limits": { 00:19:32.690 "rw_ios_per_sec": 0, 00:19:32.690 "rw_mbytes_per_sec": 0, 00:19:32.690 "r_mbytes_per_sec": 0, 00:19:32.690 "w_mbytes_per_sec": 0 00:19:32.690 }, 00:19:32.690 "claimed": false, 00:19:32.690 "zoned": false, 00:19:32.690 "supported_io_types": { 00:19:32.690 "read": true, 00:19:32.690 "write": true, 00:19:32.690 "unmap": true, 00:19:32.690 "flush": false, 00:19:32.690 "reset": true, 00:19:32.690 "nvme_admin": false, 00:19:32.690 "nvme_io": false, 00:19:32.690 "nvme_io_md": false, 00:19:32.690 "write_zeroes": true, 00:19:32.690 "zcopy": false, 00:19:32.690 "get_zone_info": false, 00:19:32.690 "zone_management": false, 00:19:32.690 "zone_append": false, 00:19:32.690 "compare": false, 00:19:32.690 "compare_and_write": false, 00:19:32.690 "abort": false, 00:19:32.690 "seek_hole": true, 00:19:32.690 "seek_data": true, 00:19:32.690 "copy": false, 00:19:32.690 "nvme_iov_md": false 00:19:32.690 }, 00:19:32.690 "driver_specific": { 00:19:32.690 "lvol": { 00:19:32.690 "lvol_store_uuid": "b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276", 00:19:32.690 "base_bdev": "nvme0n1", 00:19:32.690 "thin_provision": true, 00:19:32.690 "num_allocated_clusters": 0, 00:19:32.690 "snapshot": false, 00:19:32.690 "clone": false, 00:19:32.690 "esnap_clone": false 00:19:32.690 } 00:19:32.690 } 00:19:32.690 } 00:19:32.690 ]' 00:19:32.690 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:32.690 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:32.690 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:32.952 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.213 { 00:19:33.213 "name": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:33.213 "aliases": [ 00:19:33.213 "lvs/nvme0n1p0" 00:19:33.213 ], 00:19:33.213 "product_name": "Logical Volume", 00:19:33.213 "block_size": 4096, 00:19:33.213 "num_blocks": 26476544, 00:19:33.213 "uuid": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:33.213 "assigned_rate_limits": { 00:19:33.213 "rw_ios_per_sec": 0, 00:19:33.213 "rw_mbytes_per_sec": 0, 00:19:33.213 "r_mbytes_per_sec": 0, 00:19:33.213 "w_mbytes_per_sec": 0 00:19:33.213 }, 00:19:33.213 "claimed": false, 00:19:33.213 "zoned": false, 00:19:33.213 "supported_io_types": { 00:19:33.213 "read": true, 00:19:33.213 "write": true, 00:19:33.213 "unmap": true, 00:19:33.213 "flush": false, 00:19:33.213 "reset": true, 00:19:33.213 "nvme_admin": false, 00:19:33.213 "nvme_io": false, 00:19:33.213 "nvme_io_md": false, 00:19:33.213 "write_zeroes": true, 00:19:33.213 "zcopy": false, 00:19:33.213 "get_zone_info": false, 00:19:33.213 "zone_management": false, 00:19:33.213 "zone_append": false, 00:19:33.213 "compare": false, 00:19:33.213 "compare_and_write": false, 00:19:33.213 "abort": false, 00:19:33.213 "seek_hole": true, 00:19:33.213 "seek_data": true, 00:19:33.213 "copy": false, 00:19:33.213 "nvme_iov_md": false 00:19:33.213 }, 00:19:33.213 "driver_specific": { 00:19:33.213 "lvol": { 00:19:33.213 "lvol_store_uuid": "b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276", 00:19:33.213 "base_bdev": "nvme0n1", 00:19:33.213 "thin_provision": true, 00:19:33.213 "num_allocated_clusters": 0, 00:19:33.213 "snapshot": false, 00:19:33.213 "clone": false, 00:19:33.213 "esnap_clone": false 00:19:33.213 } 00:19:33.213 } 00:19:33.213 } 00:19:33.213 ]' 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.213 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:33.473 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fe279f2a-70d1-484f-914e-89dee640c849 00:19:33.733 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:33.733 { 00:19:33.733 "name": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:33.733 "aliases": [ 00:19:33.733 "lvs/nvme0n1p0" 00:19:33.733 ], 00:19:33.733 "product_name": "Logical Volume", 00:19:33.733 "block_size": 4096, 00:19:33.733 "num_blocks": 26476544, 00:19:33.733 "uuid": "fe279f2a-70d1-484f-914e-89dee640c849", 00:19:33.733 "assigned_rate_limits": { 00:19:33.733 "rw_ios_per_sec": 0, 00:19:33.733 "rw_mbytes_per_sec": 0, 00:19:33.733 "r_mbytes_per_sec": 0, 00:19:33.733 "w_mbytes_per_sec": 0 00:19:33.733 }, 00:19:33.733 "claimed": false, 00:19:33.733 "zoned": false, 00:19:33.733 "supported_io_types": { 00:19:33.733 "read": true, 00:19:33.733 "write": true, 00:19:33.733 "unmap": true, 00:19:33.733 "flush": false, 00:19:33.733 "reset": true, 00:19:33.733 "nvme_admin": false, 00:19:33.733 "nvme_io": false, 00:19:33.733 "nvme_io_md": false, 00:19:33.733 "write_zeroes": true, 00:19:33.733 "zcopy": false, 00:19:33.733 "get_zone_info": false, 00:19:33.733 "zone_management": false, 00:19:33.733 "zone_append": false, 00:19:33.733 "compare": false, 00:19:33.733 "compare_and_write": false, 00:19:33.733 "abort": false, 00:19:33.733 "seek_hole": true, 00:19:33.733 "seek_data": true, 00:19:33.733 "copy": false, 00:19:33.733 "nvme_iov_md": false 00:19:33.733 }, 00:19:33.733 "driver_specific": { 00:19:33.733 "lvol": { 00:19:33.733 "lvol_store_uuid": "b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276", 00:19:33.733 "base_bdev": "nvme0n1", 00:19:33.733 "thin_provision": true, 00:19:33.733 "num_allocated_clusters": 0, 00:19:33.733 "snapshot": false, 00:19:33.733 "clone": false, 00:19:33.733 "esnap_clone": false 00:19:33.733 } 00:19:33.733 } 00:19:33.733 } 00:19:33.733 ]' 00:19:33.733 17:06:07 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:33.733 17:06:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d fe279f2a-70d1-484f-914e-89dee640c849 -c nvc0n1p0 --l2p_dram_limit 20 00:19:33.994 [2024-12-05 17:06:08.228749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.994 [2024-12-05 17:06:08.228864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:33.994 [2024-12-05 17:06:08.228880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:33.994 [2024-12-05 17:06:08.228890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.994 [2024-12-05 17:06:08.228936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.994 [2024-12-05 17:06:08.228945] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:33.994 [2024-12-05 17:06:08.228969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:33.994 [2024-12-05 17:06:08.228977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.994 [2024-12-05 17:06:08.228991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:33.994 [2024-12-05 17:06:08.229514] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:33.994 [2024-12-05 17:06:08.229525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.994 [2024-12-05 17:06:08.229532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:33.994 [2024-12-05 17:06:08.229539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:19:33.994 [2024-12-05 17:06:08.229546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.994 [2024-12-05 17:06:08.229591] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6890dc3b-1b60-478a-9838-26e3ed005d1c 00:19:33.994 [2024-12-05 17:06:08.230512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.994 [2024-12-05 17:06:08.230528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:33.994 [2024-12-05 17:06:08.230539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:33.994 [2024-12-05 17:06:08.230545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.994 [2024-12-05 17:06:08.235201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.235297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:33.995 [2024-12-05 17:06:08.235312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.621 ms 00:19:33.995 [2024-12-05 17:06:08.235320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.235385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.235392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:33.995 [2024-12-05 17:06:08.235402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:33.995 [2024-12-05 17:06:08.235408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.235440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.235447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:33.995 [2024-12-05 17:06:08.235455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:33.995 [2024-12-05 17:06:08.235460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.235478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:33.995 [2024-12-05 17:06:08.238369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.238462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:33.995 [2024-12-05 17:06:08.238473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.898 ms 00:19:33.995 [2024-12-05 17:06:08.238482] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.238508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.238517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:33.995 [2024-12-05 17:06:08.238523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:33.995 [2024-12-05 17:06:08.238530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.238542] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:33.995 [2024-12-05 17:06:08.238652] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:33.995 [2024-12-05 17:06:08.238661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:33.995 [2024-12-05 17:06:08.238670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:33.995 [2024-12-05 17:06:08.238678] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:33.995 [2024-12-05 17:06:08.238687] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:33.995 [2024-12-05 17:06:08.238694] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:33.995 [2024-12-05 17:06:08.238701] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:33.995 [2024-12-05 17:06:08.238706] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:33.995 [2024-12-05 17:06:08.238713] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:33.995 [2024-12-05 17:06:08.238720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.238726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:33.995 [2024-12-05 17:06:08.238732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:19:33.995 [2024-12-05 17:06:08.238739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.238802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.995 [2024-12-05 17:06:08.238810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:33.995 [2024-12-05 17:06:08.238816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:33.995 [2024-12-05 17:06:08.238825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.995 [2024-12-05 17:06:08.238892] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:33.995 [2024-12-05 17:06:08.238902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:33.995 [2024-12-05 17:06:08.238908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.995 [2024-12-05 17:06:08.238915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.995 [2024-12-05 17:06:08.238921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:33.995 [2024-12-05 17:06:08.238927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:33.995 [2024-12-05 17:06:08.238932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:33.995 
[2024-12-05 17:06:08.238939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:33.995 [2024-12-05 17:06:08.238944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:33.995 [2024-12-05 17:06:08.238966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.995 [2024-12-05 17:06:08.238972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:33.995 [2024-12-05 17:06:08.238984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:33.995 [2024-12-05 17:06:08.238990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:33.995 [2024-12-05 17:06:08.238997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:33.995 [2024-12-05 17:06:08.239002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:33.995 [2024-12-05 17:06:08.239012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:33.995 [2024-12-05 17:06:08.239024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:33.995 [2024-12-05 17:06:08.239030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:33.995 [2024-12-05 17:06:08.239042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.995 [2024-12-05 17:06:08.239052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:33.995 [2024-12-05 17:06:08.239058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.995 [2024-12-05 17:06:08.239069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:33.995 [2024-12-05 17:06:08.239074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.995 [2024-12-05 17:06:08.239085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:33.995 [2024-12-05 17:06:08.239091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:33.995 [2024-12-05 17:06:08.239104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:33.995 [2024-12-05 17:06:08.239109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:33.995 [2024-12-05 17:06:08.239116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.995 [2024-12-05 17:06:08.239121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:33.995 [2024-12-05 17:06:08.239127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:33.995 [2024-12-05 17:06:08.239131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:33.996 [2024-12-05 17:06:08.239138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:33.996 [2024-12-05 17:06:08.239142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:33.996 [2024-12-05 17:06:08.239148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.996 [2024-12-05 17:06:08.239153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:33.996 [2024-12-05 17:06:08.239159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:33.996 [2024-12-05 17:06:08.239164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.996 [2024-12-05 17:06:08.239170] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:33.996 [2024-12-05 17:06:08.239175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:33.996 [2024-12-05 17:06:08.239182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:33.996 [2024-12-05 17:06:08.239187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:33.996 [2024-12-05 17:06:08.239196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:33.996 [2024-12-05 17:06:08.239201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:33.996 [2024-12-05 17:06:08.239208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:33.996 [2024-12-05 17:06:08.239213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:33.996 [2024-12-05 17:06:08.239219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:33.996 [2024-12-05 17:06:08.239224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:33.996 [2024-12-05 17:06:08.239231] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:33.996 [2024-12-05 17:06:08.239238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:33.996 [2024-12-05 17:06:08.239251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:33.996 [2024-12-05 17:06:08.239258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:33.996 [2024-12-05 17:06:08.239264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:33.996 [2024-12-05 17:06:08.239270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:33.996 [2024-12-05 17:06:08.239275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:33.996 [2024-12-05 17:06:08.239283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:33.996 [2024-12-05 17:06:08.239288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:33.996 [2024-12-05 17:06:08.239296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:33.996 [2024-12-05 17:06:08.239301] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239325] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:33.996 [2024-12-05 17:06:08.239332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:33.996 [2024-12-05 17:06:08.239338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:33.996 [2024-12-05 17:06:08.239352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:33.996 [2024-12-05 17:06:08.239359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:33.996 [2024-12-05 17:06:08.239364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:33.996 [2024-12-05 17:06:08.239371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:33.996 [2024-12-05 17:06:08.239376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:33.996 [2024-12-05 17:06:08.239383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:19:33.996 [2024-12-05 17:06:08.239389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:33.996 [2024-12-05 17:06:08.239426] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
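Everything the FTL startup trace above reports follows from the bdev stack assembled earlier in this run. Condensed from the xtrace, the RPC sequence that produced it (the rpc.py calls and UUIDs are verbatim from this run's trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base (data) device: raw NVMe namespace -> lvstore -> thin-provisioned lvol
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
    $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276
    # NV cache device: second NVMe namespace, split down to the computed cache size
    $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc_py bdev_split_create nvc0n1 -s 5171 1
    # FTL bdev on top of both, with the L2P DRAM budget capped at 20 MiB
    $rpc_py -t 240 bdev_ftl_create -b ftl0 -d fe279f2a-70d1-484f-914e-89dee640c849 -c nvc0n1p0 --l2p_dram_limit 20

Before creating the lvstore, the script's clear_lvols step deleted the stale store left by a previous run (uuid 0422f55c-0785-4a1b-b23c-8810a5a64105 above). The sizes are internally consistent: get_bdev_size multiplies block_size by num_blocks from bdev_get_bdevs, giving 1310720 x 4096 / 1048576 = 5120 MiB for the raw namespace and 26476544 x 4096 / 1048576 = 103424 MiB for the lvol, and the layout dump's 80.00 MiB l2p region is exactly 20971520 L2P entries x 4 B per entry.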
00:19:33.996 [2024-12-05 17:06:08.239434] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:38.203 [2024-12-05 17:06:12.432090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.432168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:38.203 [2024-12-05 17:06:12.432188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4192.642 ms 00:19:38.203 [2024-12-05 17:06:12.432198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.464270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.464330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.203 [2024-12-05 17:06:12.464347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.821 ms 00:19:38.203 [2024-12-05 17:06:12.464356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.464497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.464508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:38.203 [2024-12-05 17:06:12.464522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:38.203 [2024-12-05 17:06:12.464531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.509806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.510067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.203 [2024-12-05 17:06:12.510207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.224 ms 00:19:38.203 [2024-12-05 17:06:12.510238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.510302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.510327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.203 [2024-12-05 17:06:12.510350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:38.203 [2024-12-05 17:06:12.510373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.510982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.511056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.203 [2024-12-05 17:06:12.511261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:38.203 [2024-12-05 17:06:12.511273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.511403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.511414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.203 [2024-12-05 17:06:12.511427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:19:38.203 [2024-12-05 17:06:12.511435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.527428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.527472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.203 [2024-12-05 
17:06:12.527487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.970 ms 00:19:38.203 [2024-12-05 17:06:12.527505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.203 [2024-12-05 17:06:12.540896] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:38.203 [2024-12-05 17:06:12.548872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.203 [2024-12-05 17:06:12.549084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:38.203 [2024-12-05 17:06:12.549104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.283 ms 00:19:38.203 [2024-12-05 17:06:12.549115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.464 [2024-12-05 17:06:12.647500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.464 [2024-12-05 17:06:12.647562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:38.464 [2024-12-05 17:06:12.647577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.354 ms 00:19:38.464 [2024-12-05 17:06:12.647588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.464 [2024-12-05 17:06:12.647790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.464 [2024-12-05 17:06:12.647806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:38.464 [2024-12-05 17:06:12.647817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:19:38.464 [2024-12-05 17:06:12.647831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.464 [2024-12-05 17:06:12.673671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.464 [2024-12-05 17:06:12.673727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:38.464 [2024-12-05 17:06:12.673741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.788 ms 00:19:38.464 [2024-12-05 17:06:12.673752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.464 [2024-12-05 17:06:12.698706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.464 [2024-12-05 17:06:12.698757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:38.464 [2024-12-05 17:06:12.698771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.905 ms 00:19:38.464 [2024-12-05 17:06:12.698781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.464 [2024-12-05 17:06:12.699403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.465 [2024-12-05 17:06:12.699428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:38.465 [2024-12-05 17:06:12.699438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.575 ms 00:19:38.465 [2024-12-05 17:06:12.699448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.465 [2024-12-05 17:06:12.789429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.465 [2024-12-05 17:06:12.789493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:38.465 [2024-12-05 17:06:12.789507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.940 ms 00:19:38.465 [2024-12-05 17:06:12.789518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.465 [2024-12-05 
17:06:12.817264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.465 [2024-12-05 17:06:12.817321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:38.465 [2024-12-05 17:06:12.817339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.651 ms 00:19:38.465 [2024-12-05 17:06:12.817349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.726 [2024-12-05 17:06:12.843829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.726 [2024-12-05 17:06:12.843885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:38.726 [2024-12-05 17:06:12.843898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.430 ms 00:19:38.726 [2024-12-05 17:06:12.843909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.726 [2024-12-05 17:06:12.870286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.726 [2024-12-05 17:06:12.870340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:38.726 [2024-12-05 17:06:12.870353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.310 ms 00:19:38.726 [2024-12-05 17:06:12.870364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.726 [2024-12-05 17:06:12.870415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.726 [2024-12-05 17:06:12.870431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:38.726 [2024-12-05 17:06:12.870441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:38.726 [2024-12-05 17:06:12.870451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.726 [2024-12-05 17:06:12.870540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.726 [2024-12-05 17:06:12.870553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:38.726 [2024-12-05 17:06:12.870562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:38.726 [2024-12-05 17:06:12.870573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.726 [2024-12-05 17:06:12.871746] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4642.492 ms, result 0 00:19:38.726 { 00:19:38.726 "name": "ftl0", 00:19:38.726 "uuid": "6890dc3b-1b60-478a-9838-26e3ed005d1c" 00:19:38.726 } 00:19:38.726 17:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:38.726 17:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:38.726 17:06:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:38.988 17:06:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:38.988 [2024-12-05 17:06:13.219912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:38.988 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:38.988 Zero copy mechanism will not be used. 00:19:38.988 Running I/O for 4 seconds... 
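With bdevperf still parked in -z mode, each workload below is driven over RPC by bdevperf.py after the bdev_ftl_get_stats | grep -qw ftl0 check above confirmed the device came up. The flags echo straight into the result JSON further down: -q is the queue depth, -w the workload type, -t the run time in seconds, and -o the I/O size in bytes (69632 B = 68 KiB, which exceeds the 65536-byte zero-copy threshold, hence the notice above). The three invocations this script makes, collected in one place:

    perf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    $perf_py perform_tests -q 1   -w randwrite -t 4 -o 69632   # 68 KiB random writes at QD 1
    $perf_py perform_tests -q 128 -w randwrite -t 4 -o 4096    # 4 KiB random writes at QD 128
    $perf_py perform_tests -q 128 -w verify    -t 4 -o 4096    # write, read back, compare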
00:19:40.880 923.00 IOPS, 61.29 MiB/s
[2024-12-05T17:06:16.633Z] 912.00 IOPS, 60.56 MiB/s
[2024-12-05T17:06:17.577Z] 942.67 IOPS, 62.60 MiB/s
[2024-12-05T17:06:17.577Z] 1001.00 IOPS, 66.47 MiB/s
00:19:43.210 Latency(us)
[2024-12-05T17:06:17.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.210 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:19:43.210 ftl0 : 4.00 1000.86 66.46 0.00 0.00 1049.79 264.66 2470.20
[2024-12-05T17:06:17.577Z] ===================================================================================================================
[2024-12-05T17:06:17.577Z] Total : 1000.86 66.46 0.00 0.00 1049.79 264.66 2470.20
00:19:43.210 {
00:19:43.210 "results": [
00:19:43.210 {
00:19:43.210 "job": "ftl0",
00:19:43.210 "core_mask": "0x1",
00:19:43.210 "workload": "randwrite",
00:19:43.210 "status": "finished",
00:19:43.210 "queue_depth": 1,
00:19:43.210 "io_size": 69632,
00:19:43.210 "runtime": 4.002558,
00:19:43.210 "iops": 1000.8599500619354,
00:19:43.210 "mibps": 66.4633560588004,
00:19:43.210 "io_failed": 0,
00:19:43.210 "io_timeout": 0,
00:19:43.210 "avg_latency_us": 1049.7887630093321,
00:19:43.210 "min_latency_us": 264.6646153846154,
00:19:43.210 "max_latency_us": 2470.203076923077
00:19:43.210 }
00:19:43.210 ],
00:19:43.210 "core_count": 1
00:19:43.210 }
00:19:43.210 [2024-12-05 17:06:17.231172] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
17:06:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-05 17:06:17.335722] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
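A note on reading the first table above: the MiB/s column is derived rather than measured separately, MiB/s = IOPS x io_size / 2^20. For the totals of the first job, 1000.86 IOPS x 69632 B / 1048576 gives about 66.46 MiB/s, matching the reported figure. A quick check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 1000.86 * 69632 / 1048576 }'   # -> 66.46 MiB/s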
00:19:45.118 8457.00 IOPS, 33.04 MiB/s
[2024-12-05T17:06:20.505Z] 7125.50 IOPS, 27.83 MiB/s
[2024-12-05T17:06:21.450Z] 6771.33 IOPS, 26.45 MiB/s
[2024-12-05T17:06:21.450Z] 6687.75 IOPS, 26.12 MiB/s
00:19:47.083 Latency(us)
[2024-12-05T17:06:21.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:47.083 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:19:47.083 ftl0 : 4.03 6663.90 26.03 0.00 0.00 19131.65 256.79 45976.02
[2024-12-05T17:06:21.450Z] ===================================================================================================================
[2024-12-05T17:06:21.450Z] Total : 6663.90 26.03 0.00 0.00 19131.65 0.00 45976.02
[2024-12-05 17:06:21.375718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:47.083 "results": [
00:19:47.083 {
00:19:47.083 "job": "ftl0",
00:19:47.083 "core_mask": "0x1",
00:19:47.083 "workload": "randwrite",
00:19:47.083 "status": "finished",
00:19:47.083 "queue_depth": 128,
00:19:47.083 "io_size": 4096,
00:19:47.083 "runtime": 4.032326,
00:19:47.083 "iops": 6663.8957266847965,
00:19:47.083 "mibps": 26.030842682362486,
00:19:47.083 "io_failed": 0,
00:19:47.083 "io_timeout": 0,
00:19:47.083 "avg_latency_us": 19131.65392212938,
00:19:47.083 "min_latency_us": 256.7876923076923,
00:19:47.083 "max_latency_us": 45976.02461538462
00:19:47.083 }
00:19:47.083 ],
00:19:47.083 "core_count": 1
00:19:47.083 }
00:19:47.083 17:06:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-05 17:06:21.501998] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
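Unlike the plain randwrite passes, the verify pass whose samples follow writes a pattern and then reads every block back for comparison, which is why its result JSON carries a verify_range. The range printed there (start 0x0, length 0x1400000) covers the whole device: 0x1400000 is 20971520 blocks, matching the L2P entry count from the startup layout dump, and at 4096 B per block that is 80 GiB of logical space:

    printf '%d blocks\n' 0x1400000                          # 20971520 blocks
    awk 'BEGIN { print 20971520 * 4096 / 2^30, "GiB" }'     # 80 GiB of logical space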
00:19:49.237 5405.00 IOPS, 21.11 MiB/s
[2024-12-05T17:06:24.548Z] 5128.50 IOPS, 20.03 MiB/s
[2024-12-05T17:06:25.935Z] 5372.67 IOPS, 20.99 MiB/s
[2024-12-05T17:06:25.935Z] 5207.00 IOPS, 20.34 MiB/s
00:19:51.568 Latency(us)
[2024-12-05T17:06:25.935Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:51.568 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:51.568 Verification LBA range: start 0x0 length 0x1400000
00:19:51.568 ftl0 : 4.02 5216.89 20.38 0.00 0.00 24453.46 230.01 37305.11
[2024-12-05T17:06:25.935Z] ===================================================================================================================
[2024-12-05T17:06:25.935Z] Total : 5216.89 20.38 0.00 0.00 24453.46 0.00 37305.11
[2024-12-05 17:06:25.535758] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:19:51.568 "results": [
00:19:51.568 {
00:19:51.568 "job": "ftl0",
00:19:51.568 "core_mask": "0x1",
00:19:51.568 "workload": "verify",
00:19:51.568 "status": "finished",
00:19:51.568 "verify_range": {
00:19:51.568 "start": 0,
00:19:51.568 "length": 20971520
00:19:51.568 },
00:19:51.568 "queue_depth": 128,
00:19:51.568 "io_size": 4096,
00:19:51.568 "runtime": 4.01695,
00:19:51.568 "iops": 5216.893414157507,
00:19:51.568 "mibps": 20.378489899052763,
00:19:51.568 "io_failed": 0,
00:19:51.568 "io_timeout": 0,
00:19:51.568 "avg_latency_us": 24453.455848003876,
00:19:51.568 "min_latency_us": 230.00615384615384,
00:19:51.568 "max_latency_us": 37305.10769230769
00:19:51.568 }
00:19:51.568 ],
00:19:51.568 "core_count": 1
00:19:51.568 }
00:19:51.568 17:06:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-05 17:06:25.755245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-05 17:06:25.755467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-12-05 17:06:25.755491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-12-05 17:06:25.755503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-05 17:06:25.755535] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-05 17:06:25.758608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-05 17:06:25.758795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-12-05 17:06:25.758822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.050 ms
[2024-12-05 17:06:25.758831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-05 17:06:25.800144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-05 17:06:25.800206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-12-05 17:06:25.800231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.275 ms
[2024-12-05 17:06:25.800240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-05 17:06:26.025530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-05 17:06:26.025743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:19:51.829 [2024-12-05 17:06:26.025776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 225.263 ms 00:19:51.829 [2024-12-05 17:06:26.025785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.829 [2024-12-05 17:06:26.032059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.829 [2024-12-05 17:06:26.032241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:51.829 [2024-12-05 17:06:26.032270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.226 ms 00:19:51.830 [2024-12-05 17:06:26.032283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.059159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.059368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:51.830 [2024-12-05 17:06:26.059398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.794 ms 00:19:51.830 [2024-12-05 17:06:26.059406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.077962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.078170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:51.830 [2024-12-05 17:06:26.078199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.398 ms 00:19:51.830 [2024-12-05 17:06:26.078208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.078453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.078484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:51.830 [2024-12-05 17:06:26.078501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:19:51.830 [2024-12-05 17:06:26.078509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.105347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.105556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:51.830 [2024-12-05 17:06:26.105582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.815 ms 00:19:51.830 [2024-12-05 17:06:26.105590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.132986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.133050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:51.830 [2024-12-05 17:06:26.133073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.277 ms 00:19:51.830 [2024-12-05 17:06:26.133084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.158392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 17:06:26.158443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:51.830 [2024-12-05 17:06:26.158458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.238 ms 00:19:51.830 [2024-12-05 17:06:26.158465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.184009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.830 [2024-12-05 
17:06:26.184216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:51.830 [2024-12-05 17:06:26.184247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.443 ms 00:19:51.830 [2024-12-05 17:06:26.184254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:51.830 [2024-12-05 17:06:26.184322] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:51.830 [2024-12-05 17:06:26.184340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:51.830 [2024-12-05 17:06:26.184979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.184989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.184997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185044] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185266] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:51.831 [2024-12-05 17:06:26.185310] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:51.831 [2024-12-05 17:06:26.185320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6890dc3b-1b60-478a-9838-26e3ed005d1c 00:19:51.831 [2024-12-05 17:06:26.185332] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:51.831 [2024-12-05 17:06:26.185341] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:51.831 [2024-12-05 17:06:26.185349] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:51.831 [2024-12-05 17:06:26.185359] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:51.831 [2024-12-05 17:06:26.185366] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:51.831 [2024-12-05 17:06:26.185376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:51.831 [2024-12-05 17:06:26.185384] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:51.831 [2024-12-05 17:06:26.185395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:51.831 [2024-12-05 17:06:26.185402] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:51.831 [2024-12-05 17:06:26.185411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:51.831 [2024-12-05 17:06:26.185419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:51.831 [2024-12-05 17:06:26.185430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:19:51.831 [2024-12-05 17:06:26.185438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.199291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-12-05 17:06:26.199486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:52.092 [2024-12-05 17:06:26.199511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.788 ms 00:19:52.092 [2024-12-05 17:06:26.199519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.199910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.092 [2024-12-05 17:06:26.199920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:52.092 [2024-12-05 17:06:26.199932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms 00:19:52.092 [2024-12-05 17:06:26.199940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.239668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.239720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:52.092 [2024-12-05 17:06:26.239738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.239747] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.239818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.239826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:52.092 [2024-12-05 17:06:26.239836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.239844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.239938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.239972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:52.092 [2024-12-05 17:06:26.239983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.239991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.240009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.240018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:52.092 [2024-12-05 17:06:26.240028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.240036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.324510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.324573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:52.092 [2024-12-05 17:06:26.324593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.324601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.393743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.393804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:52.092 [2024-12-05 17:06:26.393819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.393828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.393941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.393983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:52.092 [2024-12-05 17:06:26.393997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.394006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.394068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:52.092 [2024-12-05 17:06:26.394078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.394087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.394203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:52.092 [2024-12-05 17:06:26.394216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:52.092 [2024-12-05 17:06:26.394224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.394271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:52.092 [2024-12-05 17:06:26.394283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.394291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.394345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:52.092 [2024-12-05 17:06:26.394356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.394371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:52.092 [2024-12-05 17:06:26.394431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:52.092 [2024-12-05 17:06:26.394443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:52.092 [2024-12-05 17:06:26.394451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.092 [2024-12-05 17:06:26.394598] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 639.305 ms, result 0 00:19:52.092 true 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 75925 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 75925 ']' 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 75925 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75925 00:19:52.092 killing process with pid 75925 00:19:52.092 Received shutdown signal, test time was about 4.000000 seconds 00:19:52.092 00:19:52.092 Latency(us) 00:19:52.092 [2024-12-05T17:06:26.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.092 [2024-12-05T17:06:26.459Z] =================================================================================================================== 00:19:52.092 [2024-12-05T17:06:26.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75925' 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 75925 00:19:52.092 17:06:26 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 75925 00:19:54.010 Remove shared memory files 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:54.010 17:06:27 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:54.010 ************************************ 00:19:54.010 END TEST ftl_bdevperf 00:19:54.010 ************************************ 00:19:54.010 00:19:54.010 real 0m23.771s 00:19:54.010 user 0m26.351s 00:19:54.010 sys 0m1.020s 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.010 17:06:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:54.010 17:06:27 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:54.010 17:06:27 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:54.010 17:06:27 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.010 17:06:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:54.010 ************************************ 00:19:54.010 START TEST ftl_trim 00:19:54.010 ************************************ 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:54.010 * Looking for test storage... 00:19:54.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.010 17:06:28 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.010 --rc genhtml_branch_coverage=1 00:19:54.010 --rc genhtml_function_coverage=1 00:19:54.010 --rc genhtml_legend=1 00:19:54.010 --rc geninfo_all_blocks=1 00:19:54.010 --rc geninfo_unexecuted_blocks=1 00:19:54.010 00:19:54.010 ' 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.010 --rc genhtml_branch_coverage=1 00:19:54.010 --rc genhtml_function_coverage=1 00:19:54.010 --rc genhtml_legend=1 00:19:54.010 --rc geninfo_all_blocks=1 00:19:54.010 --rc geninfo_unexecuted_blocks=1 00:19:54.010 00:19:54.010 ' 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.010 --rc genhtml_branch_coverage=1 00:19:54.010 --rc genhtml_function_coverage=1 00:19:54.010 --rc genhtml_legend=1 00:19:54.010 --rc geninfo_all_blocks=1 00:19:54.010 --rc geninfo_unexecuted_blocks=1 00:19:54.010 00:19:54.010 ' 00:19:54.010 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:54.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.010 --rc genhtml_branch_coverage=1 00:19:54.010 --rc genhtml_function_coverage=1 00:19:54.010 --rc genhtml_legend=1 00:19:54.010 --rc geninfo_all_blocks=1 00:19:54.010 --rc geninfo_unexecuted_blocks=1 00:19:54.010 00:19:54.010 ' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
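The cmp_versions/lt trace above is autotest_common.sh deciding whether the installed lcov (1.15) predates 2.x: both version strings are split on `.`, `-`, and `:`, padded to the same number of fields, and compared field by field as integers; 1.15 < 2 holds, so the branch/function-coverage LCOV_OPTS get exported below. The same logic as a short sketch (ours and simplified; non-numeric fields are coerced to 0 rather than handled as the shell helper does):

```python
import re

def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Field-wise dotted-version compare, mirroring the shell cmp_versions above."""
    def fields(v: str) -> list[int]:
        # Split on '.', '-' and ':' exactly as IFS=.-: does.
        return [int(x) if x.isdigit() else 0 for x in re.split(r"[.\-:]", v)]
    a, b = fields(v1), fields(v2)
    width = max(len(a), len(b))
    a += [0] * (width - len(a))   # pad so 1.15 lines up against 2.0
    b += [0] * (width - len(b))
    return {"<": a < b, ">": a > b, "==": a == b}[op]

assert cmp_versions("1.15", "<", "2")   # the exact check the trace performs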
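Every FTL management transition in this log, from the 'FTL shutdown' sequence above (totalled by SPDK itself as duration = 639.305 ms) to the startup steps of the trim test further below, is emitted by mngt/ftl_mngt.c:trace_step as an Action / name / duration / status group. A small sketch for tallying where that time goes from a captured console log (regexes are ours, assuming one record per line as in the original console output):

```python
import re

NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+)")
DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def tally_trace_steps(log: str) -> dict[str, float]:
    """Sum per-step durations, pairing each 'name:' line with the 'duration:' after it."""
    totals: dict[str, float] = {}
    pending = None
    for line in log.splitlines():
        if m := NAME.search(line):
            pending = m.group(1).strip()
        elif (m := DUR.search(line)) and pending is not None:
            totals[pending] = totals.get(pending, 0.0) + float(m.group(1))
            pending = None
    return totals

# e.g. tally_trace_steps(open("console.log").read())
#      -> {'Persist L2P': 225.263, 'Stop core poller': 41.275, ...}
```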
00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:54.010 17:06:28 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:54.011 17:06:28 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76286 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76286 00:19:54.011 17:06:28 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76286 ']' 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.011 17:06:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:54.011 [2024-12-05 17:06:28.293580] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:19:54.011 [2024-12-05 17:06:28.293940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76286 ] 00:19:54.272 [2024-12-05 17:06:28.459099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:54.272 [2024-12-05 17:06:28.586036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.272 [2024-12-05 17:06:28.586221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.272 [2024-12-05 17:06:28.586311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:55.217 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.217 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:55.217 17:06:29 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:55.479 17:06:29 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:55.479 17:06:29 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:55.479 17:06:29 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:55.479 { 00:19:55.479 "name": "nvme0n1", 00:19:55.479 "aliases": [ 
00:19:55.479 "60f3c989-dfb2-4124-b064-5fd70ed94075" 00:19:55.479 ], 00:19:55.479 "product_name": "NVMe disk", 00:19:55.479 "block_size": 4096, 00:19:55.479 "num_blocks": 1310720, 00:19:55.479 "uuid": "60f3c989-dfb2-4124-b064-5fd70ed94075", 00:19:55.479 "numa_id": -1, 00:19:55.479 "assigned_rate_limits": { 00:19:55.479 "rw_ios_per_sec": 0, 00:19:55.479 "rw_mbytes_per_sec": 0, 00:19:55.479 "r_mbytes_per_sec": 0, 00:19:55.479 "w_mbytes_per_sec": 0 00:19:55.479 }, 00:19:55.479 "claimed": true, 00:19:55.479 "claim_type": "read_many_write_one", 00:19:55.479 "zoned": false, 00:19:55.479 "supported_io_types": { 00:19:55.479 "read": true, 00:19:55.479 "write": true, 00:19:55.479 "unmap": true, 00:19:55.479 "flush": true, 00:19:55.479 "reset": true, 00:19:55.479 "nvme_admin": true, 00:19:55.479 "nvme_io": true, 00:19:55.479 "nvme_io_md": false, 00:19:55.479 "write_zeroes": true, 00:19:55.479 "zcopy": false, 00:19:55.479 "get_zone_info": false, 00:19:55.479 "zone_management": false, 00:19:55.479 "zone_append": false, 00:19:55.479 "compare": true, 00:19:55.479 "compare_and_write": false, 00:19:55.479 "abort": true, 00:19:55.479 "seek_hole": false, 00:19:55.479 "seek_data": false, 00:19:55.479 "copy": true, 00:19:55.479 "nvme_iov_md": false 00:19:55.479 }, 00:19:55.479 "driver_specific": { 00:19:55.479 "nvme": [ 00:19:55.479 { 00:19:55.479 "pci_address": "0000:00:11.0", 00:19:55.479 "trid": { 00:19:55.479 "trtype": "PCIe", 00:19:55.479 "traddr": "0000:00:11.0" 00:19:55.479 }, 00:19:55.479 "ctrlr_data": { 00:19:55.479 "cntlid": 0, 00:19:55.479 "vendor_id": "0x1b36", 00:19:55.479 "model_number": "QEMU NVMe Ctrl", 00:19:55.479 "serial_number": "12341", 00:19:55.479 "firmware_revision": "8.0.0", 00:19:55.479 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:55.479 "oacs": { 00:19:55.479 "security": 0, 00:19:55.479 "format": 1, 00:19:55.479 "firmware": 0, 00:19:55.479 "ns_manage": 1 00:19:55.479 }, 00:19:55.479 "multi_ctrlr": false, 00:19:55.479 "ana_reporting": false 00:19:55.479 }, 00:19:55.479 "vs": { 00:19:55.479 "nvme_version": "1.4" 00:19:55.479 }, 00:19:55.479 "ns_data": { 00:19:55.479 "id": 1, 00:19:55.479 "can_share": false 00:19:55.479 } 00:19:55.479 } 00:19:55.479 ], 00:19:55.479 "mp_policy": "active_passive" 00:19:55.479 } 00:19:55.479 } 00:19:55.479 ]' 00:19:55.479 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:55.741 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:55.741 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:55.741 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:55.741 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:55.741 17:06:29 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:55.741 17:06:29 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:55.741 17:06:29 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:55.741 17:06:29 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:55.741 17:06:29 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:55.741 17:06:29 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:56.002 17:06:30 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276 00:19:56.002 17:06:30 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:56.002 17:06:30 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u b235eeb9-51e0-41dd-b9ae-1e4b6bd9c276 00:19:56.002 17:06:30 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:56.264 17:06:30 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=80b2ddbb-6f95-4321-963e-14bcda1ad520 00:19:56.264 17:06:30 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 80b2ddbb-6f95-4321-963e-14bcda1ad520 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:56.523 17:06:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.523 17:06:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.523 17:06:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:56.523 17:06:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:56.523 17:06:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:56.523 17:06:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:56.780 { 00:19:56.780 "name": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:56.780 "aliases": [ 00:19:56.780 "lvs/nvme0n1p0" 00:19:56.780 ], 00:19:56.780 "product_name": "Logical Volume", 00:19:56.780 "block_size": 4096, 00:19:56.780 "num_blocks": 26476544, 00:19:56.780 "uuid": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:56.780 "assigned_rate_limits": { 00:19:56.780 "rw_ios_per_sec": 0, 00:19:56.780 "rw_mbytes_per_sec": 0, 00:19:56.780 "r_mbytes_per_sec": 0, 00:19:56.780 "w_mbytes_per_sec": 0 00:19:56.780 }, 00:19:56.780 "claimed": false, 00:19:56.780 "zoned": false, 00:19:56.780 "supported_io_types": { 00:19:56.780 "read": true, 00:19:56.780 "write": true, 00:19:56.780 "unmap": true, 00:19:56.780 "flush": false, 00:19:56.780 "reset": true, 00:19:56.780 "nvme_admin": false, 00:19:56.780 "nvme_io": false, 00:19:56.780 "nvme_io_md": false, 00:19:56.780 "write_zeroes": true, 00:19:56.780 "zcopy": false, 00:19:56.780 "get_zone_info": false, 00:19:56.780 "zone_management": false, 00:19:56.780 "zone_append": false, 00:19:56.780 "compare": false, 00:19:56.780 "compare_and_write": false, 00:19:56.780 "abort": false, 00:19:56.780 "seek_hole": true, 00:19:56.780 "seek_data": true, 00:19:56.780 "copy": false, 00:19:56.780 "nvme_iov_md": false 00:19:56.780 }, 00:19:56.780 "driver_specific": { 00:19:56.780 "lvol": { 00:19:56.780 "lvol_store_uuid": "80b2ddbb-6f95-4321-963e-14bcda1ad520", 00:19:56.780 "base_bdev": "nvme0n1", 00:19:56.780 "thin_provision": true, 00:19:56.780 "num_allocated_clusters": 0, 00:19:56.780 "snapshot": false, 00:19:56.780 "clone": false, 00:19:56.780 "esnap_clone": false 00:19:56.780 } 00:19:56.780 } 00:19:56.780 } 00:19:56.780 ]' 00:19:56.780 17:06:31 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:56.780 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:56.780 17:06:31 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:56.780 17:06:31 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:56.780 17:06:31 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:57.037 17:06:31 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:57.037 17:06:31 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:57.037 17:06:31 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.037 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.037 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.037 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:57.037 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:57.037 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.293 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.293 { 00:19:57.293 "name": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:57.293 "aliases": [ 00:19:57.293 "lvs/nvme0n1p0" 00:19:57.293 ], 00:19:57.293 "product_name": "Logical Volume", 00:19:57.293 "block_size": 4096, 00:19:57.293 "num_blocks": 26476544, 00:19:57.293 "uuid": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:57.293 "assigned_rate_limits": { 00:19:57.293 "rw_ios_per_sec": 0, 00:19:57.293 "rw_mbytes_per_sec": 0, 00:19:57.293 "r_mbytes_per_sec": 0, 00:19:57.293 "w_mbytes_per_sec": 0 00:19:57.293 }, 00:19:57.293 "claimed": false, 00:19:57.293 "zoned": false, 00:19:57.293 "supported_io_types": { 00:19:57.293 "read": true, 00:19:57.293 "write": true, 00:19:57.293 "unmap": true, 00:19:57.294 "flush": false, 00:19:57.294 "reset": true, 00:19:57.294 "nvme_admin": false, 00:19:57.294 "nvme_io": false, 00:19:57.294 "nvme_io_md": false, 00:19:57.294 "write_zeroes": true, 00:19:57.294 "zcopy": false, 00:19:57.294 "get_zone_info": false, 00:19:57.294 "zone_management": false, 00:19:57.294 "zone_append": false, 00:19:57.294 "compare": false, 00:19:57.294 "compare_and_write": false, 00:19:57.294 "abort": false, 00:19:57.294 "seek_hole": true, 00:19:57.294 "seek_data": true, 00:19:57.294 "copy": false, 00:19:57.294 "nvme_iov_md": false 00:19:57.294 }, 00:19:57.294 "driver_specific": { 00:19:57.294 "lvol": { 00:19:57.294 "lvol_store_uuid": "80b2ddbb-6f95-4321-963e-14bcda1ad520", 00:19:57.294 "base_bdev": "nvme0n1", 00:19:57.294 "thin_provision": true, 00:19:57.294 "num_allocated_clusters": 0, 00:19:57.294 "snapshot": false, 00:19:57.294 "clone": false, 00:19:57.294 "esnap_clone": false 00:19:57.294 } 00:19:57.294 } 00:19:57.294 } 00:19:57.294 ]' 00:19:57.294 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.294 17:06:31 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:57.294 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.294 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:57.294 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:57.294 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:57.294 17:06:31 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:57.294 17:06:31 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:57.551 17:06:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:57.551 17:06:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:57.551 17:06:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.551 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.551 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:57.551 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:57.551 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:57.551 17:06:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b db768b98-f9bf-442c-aeb5-1dc20a9f3fbe 00:19:57.808 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:57.808 { 00:19:57.808 "name": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:57.808 "aliases": [ 00:19:57.808 "lvs/nvme0n1p0" 00:19:57.808 ], 00:19:57.808 "product_name": "Logical Volume", 00:19:57.808 "block_size": 4096, 00:19:57.808 "num_blocks": 26476544, 00:19:57.808 "uuid": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:19:57.808 "assigned_rate_limits": { 00:19:57.808 "rw_ios_per_sec": 0, 00:19:57.808 "rw_mbytes_per_sec": 0, 00:19:57.808 "r_mbytes_per_sec": 0, 00:19:57.809 "w_mbytes_per_sec": 0 00:19:57.809 }, 00:19:57.809 "claimed": false, 00:19:57.809 "zoned": false, 00:19:57.809 "supported_io_types": { 00:19:57.809 "read": true, 00:19:57.809 "write": true, 00:19:57.809 "unmap": true, 00:19:57.809 "flush": false, 00:19:57.809 "reset": true, 00:19:57.809 "nvme_admin": false, 00:19:57.809 "nvme_io": false, 00:19:57.809 "nvme_io_md": false, 00:19:57.809 "write_zeroes": true, 00:19:57.809 "zcopy": false, 00:19:57.809 "get_zone_info": false, 00:19:57.809 "zone_management": false, 00:19:57.809 "zone_append": false, 00:19:57.809 "compare": false, 00:19:57.809 "compare_and_write": false, 00:19:57.809 "abort": false, 00:19:57.809 "seek_hole": true, 00:19:57.809 "seek_data": true, 00:19:57.809 "copy": false, 00:19:57.809 "nvme_iov_md": false 00:19:57.809 }, 00:19:57.809 "driver_specific": { 00:19:57.809 "lvol": { 00:19:57.809 "lvol_store_uuid": "80b2ddbb-6f95-4321-963e-14bcda1ad520", 00:19:57.809 "base_bdev": "nvme0n1", 00:19:57.809 "thin_provision": true, 00:19:57.809 "num_allocated_clusters": 0, 00:19:57.809 "snapshot": false, 00:19:57.809 "clone": false, 00:19:57.809 "esnap_clone": false 00:19:57.809 } 00:19:57.809 } 00:19:57.809 } 00:19:57.809 ]' 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:57.809 17:06:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:57.809 17:06:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:57.809 17:06:32 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d db768b98-f9bf-442c-aeb5-1dc20a9f3fbe -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:58.068 [2024-12-05 17:06:32.262468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.262510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.068 [2024-12-05 17:06:32.262523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:58.068 [2024-12-05 17:06:32.262530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.264783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.264813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.068 [2024-12-05 17:06:32.264822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:19:58.068 [2024-12-05 17:06:32.264829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.264907] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.068 [2024-12-05 17:06:32.265469] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.068 [2024-12-05 17:06:32.265496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.265503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.068 [2024-12-05 17:06:32.265511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:19:58.068 [2024-12-05 17:06:32.265517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.265606] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c197495b-421a-4479-a175-3609e74ac63a 00:19:58.068 [2024-12-05 17:06:32.266631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.266737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:58.068 [2024-12-05 17:06:32.266750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:58.068 [2024-12-05 17:06:32.266758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.272028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.272054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.068 [2024-12-05 17:06:32.272063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.204 ms 00:19:58.068 [2024-12-05 17:06:32.272070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.272175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.272185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.068 [2024-12-05 17:06:32.272191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.056 ms 00:19:58.068 [2024-12-05 17:06:32.272201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.272227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.272234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.068 [2024-12-05 17:06:32.272240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.068 [2024-12-05 17:06:32.272248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.272283] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.068 [2024-12-05 17:06:32.275213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.275239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.068 [2024-12-05 17:06:32.275249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.932 ms 00:19:58.068 [2024-12-05 17:06:32.275255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.275307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.275326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.068 [2024-12-05 17:06:32.275333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:58.068 [2024-12-05 17:06:32.275339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.275367] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:58.068 [2024-12-05 17:06:32.275474] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.068 [2024-12-05 17:06:32.275486] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.068 [2024-12-05 17:06:32.275494] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.068 [2024-12-05 17:06:32.275503] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275509] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275516] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.068 [2024-12-05 17:06:32.275522] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.068 [2024-12-05 17:06:32.275530] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.068 [2024-12-05 17:06:32.275537] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.068 [2024-12-05 17:06:32.275544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 [2024-12-05 17:06:32.275549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.068 [2024-12-05 17:06:32.275557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:19:58.068 [2024-12-05 17:06:32.275562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.275640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.068 
[2024-12-05 17:06:32.275647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.068 [2024-12-05 17:06:32.275653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:58.068 [2024-12-05 17:06:32.275659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.068 [2024-12-05 17:06:32.275764] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.068 [2024-12-05 17:06:32.275771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.068 [2024-12-05 17:06:32.275778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.068 [2024-12-05 17:06:32.275796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.068 [2024-12-05 17:06:32.275814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.068 [2024-12-05 17:06:32.275825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.068 [2024-12-05 17:06:32.275830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.068 [2024-12-05 17:06:32.275837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.068 [2024-12-05 17:06:32.275842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.068 [2024-12-05 17:06:32.275859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.068 [2024-12-05 17:06:32.275865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.068 [2024-12-05 17:06:32.275877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.068 [2024-12-05 17:06:32.275896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.068 [2024-12-05 17:06:32.275913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.068 [2024-12-05 17:06:32.275931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:58.068 [2024-12-05 17:06:32.275962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.068 [2024-12-05 17:06:32.275969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.068 [2024-12-05 17:06:32.275974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.068 [2024-12-05 17:06:32.275982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.069 [2024-12-05 17:06:32.275988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.069 [2024-12-05 17:06:32.275995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.069 [2024-12-05 17:06:32.276000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.069 [2024-12-05 17:06:32.276007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.069 [2024-12-05 17:06:32.276012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.069 [2024-12-05 17:06:32.276019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.069 [2024-12-05 17:06:32.276024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.069 [2024-12-05 17:06:32.276030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.069 [2024-12-05 17:06:32.276036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.069 [2024-12-05 17:06:32.276042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.069 [2024-12-05 17:06:32.276047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.069 [2024-12-05 17:06:32.276054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.069 [2024-12-05 17:06:32.276059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.069 [2024-12-05 17:06:32.276066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.069 [2024-12-05 17:06:32.276072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.069 [2024-12-05 17:06:32.276080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.069 [2024-12-05 17:06:32.276085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.069 [2024-12-05 17:06:32.276092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.069 [2024-12-05 17:06:32.276097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.069 [2024-12-05 17:06:32.276107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.069 [2024-12-05 17:06:32.276113] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.069 [2024-12-05 17:06:32.276121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276129] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.069 [2024-12-05 17:06:32.276135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.069 [2024-12-05 17:06:32.276141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:19:58.069 [2024-12-05 17:06:32.276148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.069 [2024-12-05 17:06:32.276154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.069 [2024-12-05 17:06:32.276160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.069 [2024-12-05 17:06:32.276166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.069 [2024-12-05 17:06:32.276173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.069 [2024-12-05 17:06:32.276178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.069 [2024-12-05 17:06:32.276187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.069 [2024-12-05 17:06:32.276217] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.069 [2024-12-05 17:06:32.276226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.069 [2024-12-05 17:06:32.276240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.069 [2024-12-05 17:06:32.276245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.069 [2024-12-05 17:06:32.276252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.069 [2024-12-05 17:06:32.276258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.069 [2024-12-05 17:06:32.276265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.069 [2024-12-05 17:06:32.276271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:19:58.069 [2024-12-05 17:06:32.276278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.069 [2024-12-05 17:06:32.276363] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:58.069 [2024-12-05 17:06:32.276380] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:01.355 [2024-12-05 17:06:34.995847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:34.995904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:01.355 [2024-12-05 17:06:34.995919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2719.473 ms 00:20:01.355 [2024-12-05 17:06:34.995929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.021522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.021568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:01.355 [2024-12-05 17:06:35.021580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.316 ms 00:20:01.355 [2024-12-05 17:06:35.021590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.021717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.021729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:01.355 [2024-12-05 17:06:35.021753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:01.355 [2024-12-05 17:06:35.021764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.063628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.063673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:01.355 [2024-12-05 17:06:35.063685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.823 ms 00:20:01.355 [2024-12-05 17:06:35.063696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.063775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.063789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:01.355 [2024-12-05 17:06:35.063798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:01.355 [2024-12-05 17:06:35.063807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.064149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.064170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:01.355 [2024-12-05 17:06:35.064179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:20:01.355 [2024-12-05 17:06:35.064188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.064295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.064305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:01.355 [2024-12-05 17:06:35.064326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:01.355 [2024-12-05 17:06:35.064337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.078757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.078792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:01.355 [2024-12-05 17:06:35.078802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.387 ms 00:20:01.355 [2024-12-05 17:06:35.078811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.090062] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:01.355 [2024-12-05 17:06:35.104565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.104598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:01.355 [2024-12-05 17:06:35.104611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.644 ms 00:20:01.355 [2024-12-05 17:06:35.104619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.181591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.181634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:01.355 [2024-12-05 17:06:35.181649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.899 ms 00:20:01.355 [2024-12-05 17:06:35.181657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.181880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.181892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:01.355 [2024-12-05 17:06:35.181904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.136 ms 00:20:01.355 [2024-12-05 17:06:35.181912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.204883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.204915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:01.355 [2024-12-05 17:06:35.204928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.940 ms 00:20:01.355 [2024-12-05 17:06:35.204936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.227045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.227181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:01.355 [2024-12-05 17:06:35.227201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.032 ms 00:20:01.355 [2024-12-05 17:06:35.227208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.227787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.227825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:01.355 [2024-12-05 17:06:35.227836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:20:01.355 [2024-12-05 17:06:35.227844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.355 [2024-12-05 17:06:35.300902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.355 [2024-12-05 17:06:35.300937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:01.355 [2024-12-05 17:06:35.300967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.025 ms 00:20:01.355 [2024-12-05 17:06:35.300975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
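(A quick cross-check of the layout figures in the trace above; a minimal arithmetic sketch in plain shell, not part of the test run itself. All inputs are taken from the log: 23592960 L2P entries at 4 bytes each, 4 KiB logical blocks, and a base bdev of 26476544 blocks / 103424 MiB.)

# L2P table size: entries x address size -> matches the 90.00 MiB l2p region
echo $(( 23592960 * 4 / 1024 / 1024 ))      # 90 (MiB)
# Exposed user capacity: entries x 4 KiB logical block
echo $(( 23592960 * 4096 / 1024 / 1024 ))   # 92160 (MiB, i.e. 90 GiB)
# Base device capacity for comparison; the gap down to 92160 MiB covers the
# --overprovisioning 10 reserve plus on-disk FTL metadata
echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 (MiB)

With --l2p_dram_limit 60 only part of that 90 MiB table may stay resident at once, hence the "l2p maximum resident size is: 59 (of 60) MiB" message above (the remaining 1 MiB appears to be kept back by the cache itself — an inference from the message, not stated explicitly in the log).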
00:20:01.356 [2024-12-05 17:06:35.325139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.356 [2024-12-05 17:06:35.325171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:01.356 [2024-12-05 17:06:35.325183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.070 ms 00:20:01.356 [2024-12-05 17:06:35.325191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.356 [2024-12-05 17:06:35.347769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.356 [2024-12-05 17:06:35.347800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:01.356 [2024-12-05 17:06:35.347812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.519 ms 00:20:01.356 [2024-12-05 17:06:35.347819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.356 [2024-12-05 17:06:35.370684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.356 [2024-12-05 17:06:35.370822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:01.356 [2024-12-05 17:06:35.370841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.795 ms 00:20:01.356 [2024-12-05 17:06:35.370848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.356 [2024-12-05 17:06:35.370905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.356 [2024-12-05 17:06:35.370916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:01.356 [2024-12-05 17:06:35.370928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:01.356 [2024-12-05 17:06:35.370936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.356 [2024-12-05 17:06:35.371025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.356 [2024-12-05 17:06:35.371035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:01.356 [2024-12-05 17:06:35.371044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:20:01.356 [2024-12-05 17:06:35.371051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.356 [2024-12-05 17:06:35.371806] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.356 [2024-12-05 17:06:35.374709] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3109.059 ms, result 0 00:20:01.356 [2024-12-05 17:06:35.375542] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:01.356 { 00:20:01.356 "name": "ftl0", 00:20:01.356 "uuid": "c197495b-421a-4479-a175-3609e74ac63a" 00:20:01.356 } 00:20:01.356 17:06:35 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:01.356 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:01.356 17:06:35 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:01.613 [ 00:20:01.613 { 00:20:01.613 "name": "ftl0", 00:20:01.613 "aliases": [ 00:20:01.613 "c197495b-421a-4479-a175-3609e74ac63a" 00:20:01.613 ], 00:20:01.613 "product_name": "FTL disk", 00:20:01.613 "block_size": 4096, 00:20:01.613 "num_blocks": 23592960, 00:20:01.613 "uuid": "c197495b-421a-4479-a175-3609e74ac63a", 00:20:01.613 "assigned_rate_limits": { 00:20:01.613 "rw_ios_per_sec": 0, 00:20:01.613 "rw_mbytes_per_sec": 0, 00:20:01.613 "r_mbytes_per_sec": 0, 00:20:01.613 "w_mbytes_per_sec": 0 00:20:01.613 }, 00:20:01.613 "claimed": false, 00:20:01.613 "zoned": false, 00:20:01.613 "supported_io_types": { 00:20:01.613 "read": true, 00:20:01.613 "write": true, 00:20:01.613 "unmap": true, 00:20:01.613 "flush": true, 00:20:01.613 "reset": false, 00:20:01.613 "nvme_admin": false, 00:20:01.613 "nvme_io": false, 00:20:01.613 "nvme_io_md": false, 00:20:01.613 "write_zeroes": true, 00:20:01.613 "zcopy": false, 00:20:01.613 "get_zone_info": false, 00:20:01.613 "zone_management": false, 00:20:01.613 "zone_append": false, 00:20:01.613 "compare": false, 00:20:01.613 "compare_and_write": false, 00:20:01.613 "abort": false, 00:20:01.613 "seek_hole": false, 00:20:01.613 "seek_data": false, 00:20:01.613 "copy": false, 00:20:01.613 "nvme_iov_md": false 00:20:01.613 }, 00:20:01.613 "driver_specific": { 00:20:01.613 "ftl": { 00:20:01.613 "base_bdev": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 00:20:01.613 "cache": "nvc0n1p0" 00:20:01.613 } 00:20:01.613 } 00:20:01.613 } 00:20:01.613 ] 00:20:01.613 17:06:35 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:20:01.613 17:06:35 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:20:01.613 17:06:35 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:01.871 17:06:35 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:20:01.871 17:06:35 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:20:01.871 17:06:36 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:20:01.871 { 00:20:01.871 "name": "ftl0", 00:20:01.871 "aliases": [ 00:20:01.871 "c197495b-421a-4479-a175-3609e74ac63a" 00:20:01.871 ], 00:20:01.871 "product_name": "FTL disk", 00:20:01.871 "block_size": 4096, 00:20:01.871 "num_blocks": 23592960, 00:20:01.871 "uuid": "c197495b-421a-4479-a175-3609e74ac63a", 00:20:01.871 "assigned_rate_limits": { 00:20:01.871 "rw_ios_per_sec": 0, 00:20:01.871 "rw_mbytes_per_sec": 0, 00:20:01.871 "r_mbytes_per_sec": 0, 00:20:01.871 "w_mbytes_per_sec": 0 00:20:01.871 }, 00:20:01.871 "claimed": false, 00:20:01.871 "zoned": false, 00:20:01.871 "supported_io_types": { 00:20:01.871 "read": true, 00:20:01.871 "write": true, 00:20:01.871 "unmap": true, 00:20:01.871 "flush": true, 00:20:01.871 "reset": false, 00:20:01.871 "nvme_admin": false, 00:20:01.871 "nvme_io": false, 00:20:01.871 "nvme_io_md": false, 00:20:01.871 "write_zeroes": true, 00:20:01.871 "zcopy": false, 00:20:01.871 "get_zone_info": false, 00:20:01.871 "zone_management": false, 00:20:01.871 "zone_append": false, 00:20:01.871 "compare": false, 00:20:01.871 "compare_and_write": false, 00:20:01.871 "abort": false, 00:20:01.871 "seek_hole": false, 00:20:01.871 "seek_data": false, 00:20:01.871 "copy": false, 00:20:01.871 "nvme_iov_md": false 00:20:01.871 }, 00:20:01.871 "driver_specific": { 00:20:01.871 "ftl": { 00:20:01.871 "base_bdev": "db768b98-f9bf-442c-aeb5-1dc20a9f3fbe", 
00:20:01.871 "cache": "nvc0n1p0" 00:20:01.871 } 00:20:01.871 } 00:20:01.871 } 00:20:01.871 ]' 00:20:01.871 17:06:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:20:01.871 17:06:36 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:20:01.871 17:06:36 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:02.128 [2024-12-05 17:06:36.390923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.128 [2024-12-05 17:06:36.390978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.128 [2024-12-05 17:06:36.390993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:02.128 [2024-12-05 17:06:36.391006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.391047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:02.129 [2024-12-05 17:06:36.393642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.393775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.129 [2024-12-05 17:06:36.393797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:20:02.129 [2024-12-05 17:06:36.393805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.394365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.394381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.129 [2024-12-05 17:06:36.394392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.522 ms 00:20:02.129 [2024-12-05 17:06:36.394399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.398059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.398081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.129 [2024-12-05 17:06:36.398092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.632 ms 00:20:02.129 [2024-12-05 17:06:36.398100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.405232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.405260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.129 [2024-12-05 17:06:36.405271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.063 ms 00:20:02.129 [2024-12-05 17:06:36.405279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.428859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.428995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.129 [2024-12-05 17:06:36.429017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.504 ms 00:20:02.129 [2024-12-05 17:06:36.429025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.444216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.444329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.129 [2024-12-05 17:06:36.444349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 15.134 ms 00:20:02.129 [2024-12-05 17:06:36.444359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.444552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.444563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.129 [2024-12-05 17:06:36.444572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:20:02.129 [2024-12-05 17:06:36.444579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.467662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.467768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.129 [2024-12-05 17:06:36.467785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.046 ms 00:20:02.129 [2024-12-05 17:06:36.467792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.129 [2024-12-05 17:06:36.490366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.129 [2024-12-05 17:06:36.490469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.129 [2024-12-05 17:06:36.490488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.522 ms 00:20:02.129 [2024-12-05 17:06:36.490496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.387 [2024-12-05 17:06:36.512741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.387 [2024-12-05 17:06:36.512771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:02.387 [2024-12-05 17:06:36.512783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.174 ms 00:20:02.387 [2024-12-05 17:06:36.512790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.387 [2024-12-05 17:06:36.535122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.387 [2024-12-05 17:06:36.535152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:02.387 [2024-12-05 17:06:36.535163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.219 ms 00:20:02.387 [2024-12-05 17:06:36.535170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.387 [2024-12-05 17:06:36.535228] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:02.387 [2024-12-05 17:06:36.535242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535306] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 
[2024-12-05 17:06:36.535532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:02.387 [2024-12-05 17:06:36.535557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:20:02.388 [2024-12-05 17:06:36.535741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.535999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:02.388 [2024-12-05 17:06:36.536142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:02.388 [2024-12-05 17:06:36.536153] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:02.388 [2024-12-05 17:06:36.536161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:02.388 [2024-12-05 17:06:36.536170] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:02.388 [2024-12-05 17:06:36.536177] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:02.388 [2024-12-05 17:06:36.536188] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:02.388 [2024-12-05 17:06:36.536195] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:02.388 [2024-12-05 17:06:36.536204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:20:02.388 [2024-12-05 17:06:36.536211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:02.389 [2024-12-05 17:06:36.536218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:02.389 [2024-12-05 17:06:36.536224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:02.389 [2024-12-05 17:06:36.536233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.389 [2024-12-05 17:06:36.536240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:02.389 [2024-12-05 17:06:36.536250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:20:02.389 [2024-12-05 17:06:36.536263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.548536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.389 [2024-12-05 17:06:36.548566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:02.389 [2024-12-05 17:06:36.548580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.223 ms 00:20:02.389 [2024-12-05 17:06:36.548587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.548978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.389 [2024-12-05 17:06:36.548990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:02.389 [2024-12-05 17:06:36.549000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:20:02.389 [2024-12-05 17:06:36.549007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.592657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.592696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:02.389 [2024-12-05 17:06:36.592708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.592716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.592822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.592831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:02.389 [2024-12-05 17:06:36.592841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.592848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.592908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.592917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:02.389 [2024-12-05 17:06:36.592930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.592938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.593121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.593153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:02.389 [2024-12-05 17:06:36.593175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.593194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.665255] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.665388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:02.389 [2024-12-05 17:06:36.665430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.665448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:02.389 [2024-12-05 17:06:36.713576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.713582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:02.389 [2024-12-05 17:06:36.713682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.713690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:02.389 [2024-12-05 17:06:36.713745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.713751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:02.389 [2024-12-05 17:06:36.713859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.713866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:02.389 [2024-12-05 17:06:36.713922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.713927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.713990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.713997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:02.389 [2024-12-05 17:06:36.714007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.714013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.389 [2024-12-05 17:06:36.714060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:02.389 [2024-12-05 17:06:36.714068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:02.389 [2024-12-05 17:06:36.714075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:02.389 [2024-12-05 17:06:36.714081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:02.389 [2024-12-05 17:06:36.714236] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 323.307 ms, result 0 00:20:02.389 true 00:20:02.389 17:06:36 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76286 00:20:02.389 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76286 ']' 00:20:02.389 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76286 00:20:02.389 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:02.389 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:02.389 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76286 00:20:02.647 killing process with pid 76286 00:20:02.647 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:02.647 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:02.647 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76286' 00:20:02.647 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76286 00:20:02.647 17:06:36 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76286 00:20:09.207 17:06:42 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:20:09.470 65536+0 records in 00:20:09.470 65536+0 records out 00:20:09.470 268435456 bytes (268 MB, 256 MiB) copied, 1.08141 s, 248 MB/s 00:20:09.470 17:06:43 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:09.470 [2024-12-05 17:06:43.689584] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
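(The write phase traced at trim.sh@66 and trim.sh@69 above is self-contained: dd fills test/ftl/random_pattern with 65536 random 4 KiB blocks — the 268435456 bytes reported — and spdk_dd then replays that file onto ftl0, booting its own SPDK application (the banner that follows) from the bdev subsystem config saved earlier with save_subsystem_config. A sketch of the same two steps outside the harness, with paths shortened for readability; ftl.json stands for the config file assembled by the echo/save_subsystem_config/echo sequence above.)

dd if=/dev/urandom of=random_pattern bs=4K count=65536
build/bin/spdk_dd --if=random_pattern --ob=ftl0 --json=ftl.json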
00:20:09.470 [2024-12-05 17:06:43.689723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76469 ] 00:20:09.731 [2024-12-05 17:06:43.850633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.731 [2024-12-05 17:06:43.936422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.993 [2024-12-05 17:06:44.146037] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.993 [2024-12-05 17:06:44.146087] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:09.993 [2024-12-05 17:06:44.303677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.303726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:09.993 [2024-12-05 17:06:44.303740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:09.993 [2024-12-05 17:06:44.303748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.306614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.306799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:09.993 [2024-12-05 17:06:44.306823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.846 ms 00:20:09.993 [2024-12-05 17:06:44.306837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.307093] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:09.993 [2024-12-05 17:06:44.307806] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:09.993 [2024-12-05 17:06:44.307848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.307863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:09.993 [2024-12-05 17:06:44.307876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:20:09.993 [2024-12-05 17:06:44.307888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.309325] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:09.993 [2024-12-05 17:06:44.322867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.322914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:09.993 [2024-12-05 17:06:44.322927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.545 ms 00:20:09.993 [2024-12-05 17:06:44.322934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.323052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.323064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:09.993 [2024-12-05 17:06:44.323074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:09.993 [2024-12-05 17:06:44.323081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.329016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:09.993 [2024-12-05 17:06:44.329186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:09.993 [2024-12-05 17:06:44.329207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.892 ms 00:20:09.993 [2024-12-05 17:06:44.329219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.329333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.329350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:09.993 [2024-12-05 17:06:44.329363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:09.993 [2024-12-05 17:06:44.329374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.329412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.329426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:09.993 [2024-12-05 17:06:44.329438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:09.993 [2024-12-05 17:06:44.329447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.329467] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:09.993 [2024-12-05 17:06:44.332967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.333001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:09.993 [2024-12-05 17:06:44.333011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.504 ms 00:20:09.993 [2024-12-05 17:06:44.333018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.333077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.333087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:09.993 [2024-12-05 17:06:44.333095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:09.993 [2024-12-05 17:06:44.333103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.333123] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:09.993 [2024-12-05 17:06:44.333142] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:09.993 [2024-12-05 17:06:44.333178] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:09.993 [2024-12-05 17:06:44.333194] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:09.993 [2024-12-05 17:06:44.333297] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:09.993 [2024-12-05 17:06:44.333308] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:09.993 [2024-12-05 17:06:44.333319] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:09.993 [2024-12-05 17:06:44.333332] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333340] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333350] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:09.993 [2024-12-05 17:06:44.333358] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:09.993 [2024-12-05 17:06:44.333365] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:09.993 [2024-12-05 17:06:44.333373] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:09.993 [2024-12-05 17:06:44.333380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.333389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:09.993 [2024-12-05 17:06:44.333396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:09.993 [2024-12-05 17:06:44.333403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.333508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.993 [2024-12-05 17:06:44.333521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:09.993 [2024-12-05 17:06:44.333530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:09.993 [2024-12-05 17:06:44.333537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:09.993 [2024-12-05 17:06:44.333637] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:09.993 [2024-12-05 17:06:44.333648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:09.993 [2024-12-05 17:06:44.333656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:09.993 [2024-12-05 17:06:44.333678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:09.993 [2024-12-05 17:06:44.333701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.993 [2024-12-05 17:06:44.333717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:09.993 [2024-12-05 17:06:44.333730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:09.993 [2024-12-05 17:06:44.333736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:09.993 [2024-12-05 17:06:44.333743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:09.993 [2024-12-05 17:06:44.333753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:09.993 [2024-12-05 17:06:44.333760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:09.993 [2024-12-05 17:06:44.333773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333780] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:09.993 [2024-12-05 17:06:44.333792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:09.993 [2024-12-05 17:06:44.333813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.993 [2024-12-05 17:06:44.333826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:09.993 [2024-12-05 17:06:44.333833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:09.993 [2024-12-05 17:06:44.333839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.994 [2024-12-05 17:06:44.333846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:09.994 [2024-12-05 17:06:44.333853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:09.994 [2024-12-05 17:06:44.333860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:09.994 [2024-12-05 17:06:44.333866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:09.994 [2024-12-05 17:06:44.333873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:09.994 [2024-12-05 17:06:44.333879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.994 [2024-12-05 17:06:44.333885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:09.994 [2024-12-05 17:06:44.333892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:09.994 [2024-12-05 17:06:44.333898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:09.994 [2024-12-05 17:06:44.333905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:09.994 [2024-12-05 17:06:44.333912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:09.994 [2024-12-05 17:06:44.333918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.994 [2024-12-05 17:06:44.333925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:09.994 [2024-12-05 17:06:44.333931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:09.994 [2024-12-05 17:06:44.333938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.994 [2024-12-05 17:06:44.333946] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:09.994 [2024-12-05 17:06:44.333976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:09.994 [2024-12-05 17:06:44.333987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:09.994 [2024-12-05 17:06:44.333997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:09.994 [2024-12-05 17:06:44.334004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:09.994 [2024-12-05 17:06:44.334011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:09.994 [2024-12-05 17:06:44.334018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:09.994 
[2024-12-05 17:06:44.334025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:09.994 [2024-12-05 17:06:44.334033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:09.994 [2024-12-05 17:06:44.334039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:09.994 [2024-12-05 17:06:44.334048] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:09.994 [2024-12-05 17:06:44.334058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:09.994 [2024-12-05 17:06:44.334075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:09.994 [2024-12-05 17:06:44.334115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:09.994 [2024-12-05 17:06:44.334123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:09.994 [2024-12-05 17:06:44.334130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:09.994 [2024-12-05 17:06:44.334138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:09.994 [2024-12-05 17:06:44.334145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:09.994 [2024-12-05 17:06:44.334152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:09.994 [2024-12-05 17:06:44.334159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:09.994 [2024-12-05 17:06:44.334166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:09.994 [2024-12-05 17:06:44.334202] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:09.994 [2024-12-05 17:06:44.334210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:09.994 [2024-12-05 17:06:44.334226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:09.994 [2024-12-05 17:06:44.334233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:09.994 [2024-12-05 17:06:44.334239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:09.994 [2024-12-05 17:06:44.334247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:09.994 [2024-12-05 17:06:44.334258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:09.994 [2024-12-05 17:06:44.334265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:20:09.994 [2024-12-05 17:06:44.334273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.362371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.362414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:10.256 [2024-12-05 17:06:44.362426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.043 ms 00:20:10.256 [2024-12-05 17:06:44.362435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.362562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.362573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:10.256 [2024-12-05 17:06:44.362583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:20:10.256 [2024-12-05 17:06:44.362590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.405418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.405468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:10.256 [2024-12-05 17:06:44.405484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.804 ms 00:20:10.256 [2024-12-05 17:06:44.405493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.405599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.405611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:10.256 [2024-12-05 17:06:44.405621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:10.256 [2024-12-05 17:06:44.405630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.406119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.406142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:10.256 [2024-12-05 17:06:44.406160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms 00:20:10.256 [2024-12-05 17:06:44.406168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.406314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.406326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:10.256 [2024-12-05 17:06:44.406334] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:20:10.256 [2024-12-05 17:06:44.406342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.422416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.422465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:10.256 [2024-12-05 17:06:44.422477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.052 ms 00:20:10.256 [2024-12-05 17:06:44.422485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.437193] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:10.256 [2024-12-05 17:06:44.437248] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:10.256 [2024-12-05 17:06:44.437261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.256 [2024-12-05 17:06:44.437271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:10.256 [2024-12-05 17:06:44.437281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.656 ms 00:20:10.256 [2024-12-05 17:06:44.437289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.256 [2024-12-05 17:06:44.463854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.463907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:10.257 [2024-12-05 17:06:44.463921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.457 ms 00:20:10.257 [2024-12-05 17:06:44.463930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.477865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.477918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:10.257 [2024-12-05 17:06:44.477932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.810 ms 00:20:10.257 [2024-12-05 17:06:44.477939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.491371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.491604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:10.257 [2024-12-05 17:06:44.491634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.324 ms 00:20:10.257 [2024-12-05 17:06:44.491646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.492542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.492599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:10.257 [2024-12-05 17:06:44.492617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:20:10.257 [2024-12-05 17:06:44.492626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.561703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.561762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:10.257 [2024-12-05 17:06:44.561777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.042 ms 00:20:10.257 [2024-12-05 17:06:44.561787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.573247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:10.257 [2024-12-05 17:06:44.593013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.593061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:10.257 [2024-12-05 17:06:44.593075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.119 ms 00:20:10.257 [2024-12-05 17:06:44.593084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.593189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.593200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:10.257 [2024-12-05 17:06:44.593211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:10.257 [2024-12-05 17:06:44.593220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.593278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.593290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:10.257 [2024-12-05 17:06:44.593299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:10.257 [2024-12-05 17:06:44.593308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.593343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.593357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:10.257 [2024-12-05 17:06:44.593366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:10.257 [2024-12-05 17:06:44.593374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.593413] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:10.257 [2024-12-05 17:06:44.593425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.593434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:10.257 [2024-12-05 17:06:44.593442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:10.257 [2024-12-05 17:06:44.593450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.619963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.620212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:10.257 [2024-12-05 17:06:44.620242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.492 ms 00:20:10.257 [2024-12-05 17:06:44.620257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.257 [2024-12-05 17:06:44.620454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.257 [2024-12-05 17:06:44.620470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:10.257 [2024-12-05 17:06:44.620481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:10.257 [2024-12-05 17:06:44.620490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
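The layout numbers reported in the startup trace above are internally consistent and can be cross-checked by hand: the L2P table holds 23592960 entries at 4 bytes each, which is exactly the 90.00 MiB the "Region l2p" line reports; one band of 261120 blocks is 1020 MiB assuming the 4 KiB FTL block size implied by the dd step, so the 102400 MiB data_btm region holds the 100 whole bands enumerated in the shutdown dump further below. A quick shell check of those figures (all numbers taken from the log, the block size an assumption):

  # L2P table: 23592960 entries x 4-byte addresses, in MiB -> 90
  echo $(( 23592960 * 4 / 1024 / 1024 ))
  # One band: 261120 blocks x 4 KiB (assumed block size), in MiB -> 1020
  echo $(( 261120 * 4096 / 1024 / 1024 ))
  # Whole bands fitting in the 102400 MiB data_btm region -> 100
  echo $(( 102400 / 1020 ))
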
00:20:10.257 [2024-12-05 17:06:44.621732] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.519 [2024-12-05 17:06:44.625398] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 317.667 ms, result 0 00:20:10.519 [2024-12-05 17:06:44.626679] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:10.519 [2024-12-05 17:06:44.640932] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:11.462  [2024-12-05T17:06:46.774Z] Copying: 16/256 [MB] (16 MBps) [2024-12-05T17:06:47.719Z] Copying: 32/256 [MB] (16 MBps) [2024-12-05T17:06:48.661Z] Copying: 50/256 [MB] (17 MBps) [2024-12-05T17:06:50.048Z] Copying: 68/256 [MB] (18 MBps) [2024-12-05T17:06:50.992Z] Copying: 85/256 [MB] (16 MBps) [2024-12-05T17:06:51.652Z] Copying: 102/256 [MB] (17 MBps) [2024-12-05T17:06:53.039Z] Copying: 120/256 [MB] (17 MBps) [2024-12-05T17:06:53.982Z] Copying: 136/256 [MB] (16 MBps) [2024-12-05T17:06:54.927Z] Copying: 149/256 [MB] (12 MBps) [2024-12-05T17:06:55.870Z] Copying: 164/256 [MB] (15 MBps) [2024-12-05T17:06:56.813Z] Copying: 179/256 [MB] (14 MBps) [2024-12-05T17:06:57.756Z] Copying: 189/256 [MB] (10 MBps) [2024-12-05T17:06:58.700Z] Copying: 203/256 [MB] (13 MBps) [2024-12-05T17:07:00.089Z] Copying: 213/256 [MB] (10 MBps) [2024-12-05T17:07:00.663Z] Copying: 234/256 [MB] (20 MBps) [2024-12-05T17:07:00.924Z] Copying: 253/256 [MB] (19 MBps) [2024-12-05T17:07:00.924Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-05 17:07:00.885707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:26.557 [2024-12-05 17:07:00.895938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.557 [2024-12-05 17:07:00.895996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:26.557 [2024-12-05 17:07:00.896014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:26.557 [2024-12-05 17:07:00.896030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.557 [2024-12-05 17:07:00.896055] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:26.557 [2024-12-05 17:07:00.900539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.557 [2024-12-05 17:07:00.900628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:26.557 [2024-12-05 17:07:00.900650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.454 ms 00:20:26.557 [2024-12-05 17:07:00.900662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.557 [2024-12-05 17:07:00.905207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.557 [2024-12-05 17:07:00.905327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:26.557 [2024-12-05 17:07:00.905365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.420 ms 00:20:26.557 [2024-12-05 17:07:00.905389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.557 [2024-12-05 17:07:00.919783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.557 [2024-12-05 17:07:00.920022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:26.557 [2024-12-05 17:07:00.920046] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.347 ms 00:20:26.557 [2024-12-05 17:07:00.920055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:00.927058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:00.927237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:26.820 [2024-12-05 17:07:00.927258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.939 ms 00:20:26.820 [2024-12-05 17:07:00.927267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:00.953576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:00.953624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:26.820 [2024-12-05 17:07:00.953637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.252 ms 00:20:26.820 [2024-12-05 17:07:00.953645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:00.970793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:00.970850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:26.820 [2024-12-05 17:07:00.970867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.095 ms 00:20:26.820 [2024-12-05 17:07:00.970876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:00.971065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:00.971080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:26.820 [2024-12-05 17:07:00.971091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:26.820 [2024-12-05 17:07:00.971109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:00.997210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:00.997407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:26.820 [2024-12-05 17:07:00.997429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.082 ms 00:20:26.820 [2024-12-05 17:07:00.997437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:01.022853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:01.022900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:26.820 [2024-12-05 17:07:01.022912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.287 ms 00:20:26.820 [2024-12-05 17:07:01.022920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:01.048007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:01.048201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:26.820 [2024-12-05 17:07:01.048223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.990 ms 00:20:26.820 [2024-12-05 17:07:01.048232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:01.074102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.820 [2024-12-05 17:07:01.074152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:20:26.820 [2024-12-05 17:07:01.074165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.545 ms 00:20:26.820 [2024-12-05 17:07:01.074172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.820 [2024-12-05 17:07:01.074238] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:26.820 [2024-12-05 17:07:01.074255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 
17:07:01.074433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:26.820 [2024-12-05 17:07:01.074800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:20:26.821 [2024-12-05 17:07:01.074814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.074995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:26.821 [2024-12-05 17:07:01.075329] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:26.821 [2024-12-05 17:07:01.075338] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:26.821 [2024-12-05 17:07:01.075347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:26.821 [2024-12-05 17:07:01.075356] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:26.821 [2024-12-05 17:07:01.075363] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:26.821 [2024-12-05 17:07:01.075371] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:26.821 [2024-12-05 17:07:01.075378] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:26.821 [2024-12-05 17:07:01.075386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:26.821 [2024-12-05 17:07:01.075394] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:26.821 [2024-12-05 17:07:01.075400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:26.821 [2024-12-05 17:07:01.075409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:26.821 [2024-12-05 17:07:01.075418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.821 [2024-12-05 17:07:01.075429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:26.821 [2024-12-05 17:07:01.075437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:20:26.821 [2024-12-05 17:07:01.075445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 [2024-12-05 17:07:01.089417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.821 [2024-12-05 17:07:01.089461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:26.821 [2024-12-05 17:07:01.089473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.952 ms 00:20:26.821 [2024-12-05 17:07:01.089482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 [2024-12-05 17:07:01.089885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:26.821 [2024-12-05 17:07:01.089897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:26.821 [2024-12-05 17:07:01.089906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:20:26.821 [2024-12-05 17:07:01.089915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 [2024-12-05 17:07:01.129135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.821 [2024-12-05 17:07:01.129330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:26.821 [2024-12-05 17:07:01.129350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.821 [2024-12-05 17:07:01.129359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 
[2024-12-05 17:07:01.129468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.821 [2024-12-05 17:07:01.129478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:26.821 [2024-12-05 17:07:01.129489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.821 [2024-12-05 17:07:01.129496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 [2024-12-05 17:07:01.129557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.821 [2024-12-05 17:07:01.129568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:26.821 [2024-12-05 17:07:01.129576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.821 [2024-12-05 17:07:01.129584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:26.821 [2024-12-05 17:07:01.129601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:26.821 [2024-12-05 17:07:01.129613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:26.821 [2024-12-05 17:07:01.129622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:26.821 [2024-12-05 17:07:01.129630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.215008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.215057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.082 [2024-12-05 17:07:01.215070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.082 [2024-12-05 17:07:01.215078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.286381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.286438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.082 [2024-12-05 17:07:01.286452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.082 [2024-12-05 17:07:01.286461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.286551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.286561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.082 [2024-12-05 17:07:01.286570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.082 [2024-12-05 17:07:01.286579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.286614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.286624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.082 [2024-12-05 17:07:01.286641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.082 [2024-12-05 17:07:01.286649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.286751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.286764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.082 [2024-12-05 17:07:01.286773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.082 [2024-12-05 17:07:01.286780] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.082 [2024-12-05 17:07:01.286814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.082 [2024-12-05 17:07:01.286824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.082 [2024-12-05 17:07:01.286833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.083 [2024-12-05 17:07:01.286844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.083 [2024-12-05 17:07:01.286892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.083 [2024-12-05 17:07:01.286904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.083 [2024-12-05 17:07:01.286913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.083 [2024-12-05 17:07:01.286923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.083 [2024-12-05 17:07:01.287013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.083 [2024-12-05 17:07:01.287024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.083 [2024-12-05 17:07:01.287039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.083 [2024-12-05 17:07:01.287048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.083 [2024-12-05 17:07:01.287213] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.258 ms, result 0 00:20:28.025 00:20:28.025 00:20:28.025 17:07:02 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76662 00:20:28.025 17:07:02 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76662 00:20:28.025 17:07:02 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76662 ']' 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.025 17:07:02 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:28.025 [2024-12-05 17:07:02.212143] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
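The trim.sh@71-75 traces above restart the SPDK target after the clean shutdown: spdk_tgt is launched with init-time FTL logging, the harness blocks until the RPC socket at /var/tmp/spdk.sock answers, and rpc.py load_config replays the saved configuration, re-creating ftl0 and producing the second startup trace that follows. A simplified sketch of that restart-and-restore pattern; the polling loop is only a stand-in for the harness's waitforlisten helper, and feeding ftl.json on stdin is an assumption, since the trace does not show where load_config reads from:

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK_DIR=/home/vagrant/spdk_repo/spdk

  # Start the target with FTL init-time logging, as trim.sh@71 does.
  "$SPDK_DIR/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!

  # Poll until the RPC socket accepts a harmless request (simplified
  # stand-in for autotest_common.sh's waitforlisten).
  until "$SPDK_DIR/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done

  # Replay the saved configuration; this re-creates ftl0 and triggers a
  # startup sequence like the trace below.
  "$SPDK_DIR/scripts/rpc.py" load_config < "$SPDK_DIR/test/ftl/config/ftl.json"
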
00:20:28.025 [2024-12-05 17:07:02.212284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76662 ] 00:20:28.025 [2024-12-05 17:07:02.375536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.287 [2024-12-05 17:07:02.511064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.878 17:07:03 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.878 17:07:03 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:28.878 17:07:03 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:29.138 [2024-12-05 17:07:03.428467] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.139 [2024-12-05 17:07:03.428554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:29.399 [2024-12-05 17:07:03.605099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.399 [2024-12-05 17:07:03.605162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:29.399 [2024-12-05 17:07:03.605180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.399 [2024-12-05 17:07:03.605190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.399 [2024-12-05 17:07:03.608278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.399 [2024-12-05 17:07:03.608332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:29.399 [2024-12-05 17:07:03.608345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.065 ms 00:20:29.400 [2024-12-05 17:07:03.608354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.608503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:29.400 [2024-12-05 17:07:03.609333] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:29.400 [2024-12-05 17:07:03.609371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.609380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:29.400 [2024-12-05 17:07:03.609392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.883 ms 00:20:29.400 [2024-12-05 17:07:03.609401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.611212] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:29.400 [2024-12-05 17:07:03.625798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.625858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:29.400 [2024-12-05 17:07:03.625874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.591 ms 00:20:29.400 [2024-12-05 17:07:03.625886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.626034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.626051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:29.400 [2024-12-05 17:07:03.626063] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:29.400 [2024-12-05 17:07:03.626074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.634972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.635021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:29.400 [2024-12-05 17:07:03.635031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.838 ms 00:20:29.400 [2024-12-05 17:07:03.635042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.635197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.635212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:29.400 [2024-12-05 17:07:03.635225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:20:29.400 [2024-12-05 17:07:03.635236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.635265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.635276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:29.400 [2024-12-05 17:07:03.635284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:29.400 [2024-12-05 17:07:03.635294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.635319] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:29.400 [2024-12-05 17:07:03.639577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.639761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:29.400 [2024-12-05 17:07:03.640011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.260 ms 00:20:29.400 [2024-12-05 17:07:03.640060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.640168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.640195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:29.400 [2024-12-05 17:07:03.640223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:29.400 [2024-12-05 17:07:03.640243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.640286] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:29.400 [2024-12-05 17:07:03.640333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:29.400 [2024-12-05 17:07:03.640405] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:29.400 [2024-12-05 17:07:03.640446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:29.400 [2024-12-05 17:07:03.640580] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:29.400 [2024-12-05 17:07:03.640723] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:29.400 [2024-12-05 17:07:03.640763] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:29.400 [2024-12-05 17:07:03.640796] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:29.400 [2024-12-05 17:07:03.640831] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:29.400 [2024-12-05 17:07:03.640860] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:29.400 [2024-12-05 17:07:03.640881] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:29.400 [2024-12-05 17:07:03.640903] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:29.400 [2024-12-05 17:07:03.641137] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:29.400 [2024-12-05 17:07:03.641163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.641177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:29.400 [2024-12-05 17:07:03.641188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.882 ms 00:20:29.400 [2024-12-05 17:07:03.641202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.641304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.641317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:29.400 [2024-12-05 17:07:03.641327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:29.400 [2024-12-05 17:07:03.641338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.641444] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:29.400 [2024-12-05 17:07:03.641460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:29.400 [2024-12-05 17:07:03.641470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:29.400 [2024-12-05 17:07:03.641505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:29.400 [2024-12-05 17:07:03.641533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.400 [2024-12-05 17:07:03.641550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:29.400 [2024-12-05 17:07:03.641559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:29.400 [2024-12-05 17:07:03.641567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:29.400 [2024-12-05 17:07:03.641577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:29.400 [2024-12-05 17:07:03.641584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:29.400 [2024-12-05 17:07:03.641594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 
[2024-12-05 17:07:03.641602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:29.400 [2024-12-05 17:07:03.641612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:29.400 [2024-12-05 17:07:03.641644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:29.400 [2024-12-05 17:07:03.641672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:29.400 [2024-12-05 17:07:03.641697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:29.400 [2024-12-05 17:07:03.641723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:29.400 [2024-12-05 17:07:03.641749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.400 [2024-12-05 17:07:03.641768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:29.400 [2024-12-05 17:07:03.641778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:29.400 [2024-12-05 17:07:03.641787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:29.400 [2024-12-05 17:07:03.641796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:29.400 [2024-12-05 17:07:03.641803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:29.400 [2024-12-05 17:07:03.641813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:29.400 [2024-12-05 17:07:03.641831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:29.400 [2024-12-05 17:07:03.641837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:29.400 [2024-12-05 17:07:03.641854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:29.400 [2024-12-05 17:07:03.641863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:29.400 [2024-12-05 17:07:03.641882] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:29.400 [2024-12-05 17:07:03.641889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:29.400 [2024-12-05 17:07:03.641898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:29.400 [2024-12-05 17:07:03.641906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:29.400 [2024-12-05 17:07:03.641916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:29.400 [2024-12-05 17:07:03.641923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:29.400 [2024-12-05 17:07:03.641934] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:29.400 [2024-12-05 17:07:03.641944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.641975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:29.400 [2024-12-05 17:07:03.641983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:29.400 [2024-12-05 17:07:03.641993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:29.400 [2024-12-05 17:07:03.642000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:29.400 [2024-12-05 17:07:03.642009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:29.400 [2024-12-05 17:07:03.642017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:29.400 [2024-12-05 17:07:03.642029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:29.400 [2024-12-05 17:07:03.642036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:29.400 [2024-12-05 17:07:03.642046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:29.400 [2024-12-05 17:07:03.642053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:29.400 [2024-12-05 17:07:03.642100] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:29.400 [2024-12-05 
17:07:03.642109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:29.400 [2024-12-05 17:07:03.642133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:29.400 [2024-12-05 17:07:03.642142] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:29.400 [2024-12-05 17:07:03.642150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:29.400 [2024-12-05 17:07:03.642160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.642169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:29.400 [2024-12-05 17:07:03.642181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.784 ms 00:20:29.400 [2024-12-05 17:07:03.642189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.675685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.675876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:29.400 [2024-12-05 17:07:03.676115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.431 ms 00:20:29.400 [2024-12-05 17:07:03.676163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.676323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.676426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:29.400 [2024-12-05 17:07:03.676455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:29.400 [2024-12-05 17:07:03.676476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.712504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.712763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:29.400 [2024-12-05 17:07:03.712931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.463 ms 00:20:29.400 [2024-12-05 17:07:03.712993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.713125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.713156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:29.400 [2024-12-05 17:07:03.713183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.400 [2024-12-05 17:07:03.713207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.713781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.713968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:29.400 [2024-12-05 17:07:03.714046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:20:29.400 [2024-12-05 17:07:03.714072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.714241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.714313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:29.400 [2024-12-05 17:07:03.714340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:20:29.400 [2024-12-05 17:07:03.714360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.732892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.733094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:29.400 [2024-12-05 17:07:03.733162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.341 ms 00:20:29.400 [2024-12-05 17:07:03.733189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.400 [2024-12-05 17:07:03.762249] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:29.400 [2024-12-05 17:07:03.762473] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:29.400 [2024-12-05 17:07:03.762719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.400 [2024-12-05 17:07:03.762750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:29.400 [2024-12-05 17:07:03.763132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.381 ms 00:20:29.400 [2024-12-05 17:07:03.763197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.789962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.790152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:29.661 [2024-12-05 17:07:03.790181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.610 ms 00:20:29.661 [2024-12-05 17:07:03.790194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.803691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.803742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:29.661 [2024-12-05 17:07:03.803763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.311 ms 00:20:29.661 [2024-12-05 17:07:03.803771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.816761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.816980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:29.661 [2024-12-05 17:07:03.817008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.892 ms 00:20:29.661 [2024-12-05 17:07:03.817017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.817782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.817838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:29.661 [2024-12-05 17:07:03.817854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:29.661 [2024-12-05 17:07:03.817864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 
17:07:03.885647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.885707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:29.661 [2024-12-05 17:07:03.885724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.750 ms 00:20:29.661 [2024-12-05 17:07:03.885733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.897282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:29.661 [2024-12-05 17:07:03.916906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.917177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:29.661 [2024-12-05 17:07:03.917200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.068 ms 00:20:29.661 [2024-12-05 17:07:03.917213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.917332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.917347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:29.661 [2024-12-05 17:07:03.917357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:29.661 [2024-12-05 17:07:03.917367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.917424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.917437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:29.661 [2024-12-05 17:07:03.917449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:29.661 [2024-12-05 17:07:03.917459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.917486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.917501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:29.661 [2024-12-05 17:07:03.917510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:29.661 [2024-12-05 17:07:03.917520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.917559] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:29.661 [2024-12-05 17:07:03.917580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.661 [2024-12-05 17:07:03.917591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:29.661 [2024-12-05 17:07:03.917602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:29.661 [2024-12-05 17:07:03.917612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.661 [2024-12-05 17:07:03.944390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.662 [2024-12-05 17:07:03.944576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:29.662 [2024-12-05 17:07:03.944605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.746 ms 00:20:29.662 [2024-12-05 17:07:03.944615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.662 [2024-12-05 17:07:03.944756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.662 [2024-12-05 17:07:03.944768] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:29.662 [2024-12-05 17:07:03.944785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:29.662 [2024-12-05 17:07:03.944793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.662 [2024-12-05 17:07:03.945911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:29.662 [2024-12-05 17:07:03.949330] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 340.481 ms, result 0 00:20:29.662 [2024-12-05 17:07:03.951571] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:29.662 Some configs were skipped because the RPC state that can call them passed over. 00:20:29.662 17:07:03 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:29.922 [2024-12-05 17:07:04.196704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:29.922 [2024-12-05 17:07:04.196915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:29.922 [2024-12-05 17:07:04.197388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:20:29.922 [2024-12-05 17:07:04.197454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:29.922 [2024-12-05 17:07:04.197633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 4.132 ms, result 0 00:20:29.922 true 00:20:29.922 17:07:04 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:30.183 [2024-12-05 17:07:04.416450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.183 [2024-12-05 17:07:04.416651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:30.183 [2024-12-05 17:07:04.416679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.769 ms 00:20:30.183 [2024-12-05 17:07:04.416702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.183 [2024-12-05 17:07:04.416753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.076 ms, result 0 00:20:30.183 true 00:20:30.183 17:07:04 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76662 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76662 ']' 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76662 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76662 00:20:30.183 killing process with pid 76662 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76662' 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76662 00:20:30.183 17:07:04 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76662 00:20:30.755 [2024-12-05 17:07:05.095237] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.095281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:30.755 [2024-12-05 17:07:05.095292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:30.755 [2024-12-05 17:07:05.095301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.755 [2024-12-05 17:07:05.095319] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:30.755 [2024-12-05 17:07:05.097456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.097481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:30.755 [2024-12-05 17:07:05.097492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.122 ms 00:20:30.755 [2024-12-05 17:07:05.097499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.755 [2024-12-05 17:07:05.097726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.097734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:30.755 [2024-12-05 17:07:05.097742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:20:30.755 [2024-12-05 17:07:05.097748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.755 [2024-12-05 17:07:05.101028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.101058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:30.755 [2024-12-05 17:07:05.101067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.263 ms 00:20:30.755 [2024-12-05 17:07:05.101073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.755 [2024-12-05 17:07:05.106334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.106358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:30.755 [2024-12-05 17:07:05.106369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.229 ms 00:20:30.755 [2024-12-05 17:07:05.106376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:30.755 [2024-12-05 17:07:05.114558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:30.755 [2024-12-05 17:07:05.114587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:30.755 [2024-12-05 17:07:05.114597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.122 ms 00:20:30.755 [2024-12-05 17:07:05.114603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.121418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.016 [2024-12-05 17:07:05.121451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:31.016 [2024-12-05 17:07:05.121461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.782 ms 00:20:31.016 [2024-12-05 17:07:05.121468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.121577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.016 [2024-12-05 17:07:05.121585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:31.016 [2024-12-05 17:07:05.121594] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:31.016 [2024-12-05 17:07:05.121600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.130322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.016 [2024-12-05 17:07:05.130346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:31.016 [2024-12-05 17:07:05.130355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.705 ms 00:20:31.016 [2024-12-05 17:07:05.130360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.138646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.016 [2024-12-05 17:07:05.138671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:31.016 [2024-12-05 17:07:05.138683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.254 ms 00:20:31.016 [2024-12-05 17:07:05.138688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.145881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.016 [2024-12-05 17:07:05.145905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:31.016 [2024-12-05 17:07:05.145914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.151 ms 00:20:31.016 [2024-12-05 17:07:05.145919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.016 [2024-12-05 17:07:05.152965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.017 [2024-12-05 17:07:05.152987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:31.017 [2024-12-05 17:07:05.152995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.959 ms 00:20:31.017 [2024-12-05 17:07:05.153001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.017 [2024-12-05 17:07:05.153031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:31.017 [2024-12-05 17:07:05.153041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153110] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 
[2024-12-05 17:07:05.153271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:31.017 [2024-12-05 17:07:05.153428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:31.017 [2024-12-05 17:07:05.153581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:31.018 [2024-12-05 17:07:05.153699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:31.018 [2024-12-05 17:07:05.153709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:31.018 [2024-12-05 17:07:05.153716] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:31.018 [2024-12-05 17:07:05.153723] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:31.018 [2024-12-05 17:07:05.153728] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:31.018 [2024-12-05 17:07:05.153735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:31.018 [2024-12-05 17:07:05.153740] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:31.018 [2024-12-05 17:07:05.153747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:31.018 [2024-12-05 17:07:05.153753] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:31.018 [2024-12-05 17:07:05.153759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:31.018 [2024-12-05 17:07:05.153764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:31.018 [2024-12-05 17:07:05.153771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:31.018 [2024-12-05 17:07:05.153776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:31.018 [2024-12-05 17:07:05.153784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:20:31.018 [2024-12-05 17:07:05.153792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.163661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.018 [2024-12-05 17:07:05.163684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:31.018 [2024-12-05 17:07:05.163694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.852 ms 00:20:31.018 [2024-12-05 17:07:05.163701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.163996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.018 [2024-12-05 17:07:05.164007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:31.018 [2024-12-05 17:07:05.164015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:31.018 [2024-12-05 17:07:05.164021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.199267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.199292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.018 [2024-12-05 17:07:05.199302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.199309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.200263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.200320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.018 [2024-12-05 17:07:05.200332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.200337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.200376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.200383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.018 [2024-12-05 17:07:05.200392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.200397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.200412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.200418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.018 [2024-12-05 17:07:05.200425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.200432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.261157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.261187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.018 [2024-12-05 17:07:05.261197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.261203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 
17:07:05.310671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.310704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.018 [2024-12-05 17:07:05.310716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.310722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.310779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.310787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.018 [2024-12-05 17:07:05.310796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.310802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.310825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.310832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.018 [2024-12-05 17:07:05.310840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.310845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.310915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.310924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.018 [2024-12-05 17:07:05.310931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.310937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.310980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.310988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:31.018 [2024-12-05 17:07:05.310995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.311015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.311047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.311055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.018 [2024-12-05 17:07:05.311065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.311071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.311105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:31.018 [2024-12-05 17:07:05.311113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.018 [2024-12-05 17:07:05.311120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:31.018 [2024-12-05 17:07:05.311126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.018 [2024-12-05 17:07:05.311231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.978 ms, result 0 00:20:31.586 17:07:05 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:31.586 17:07:05 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:31.586 [2024-12-05 17:07:05.889054] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:31.586 [2024-12-05 17:07:05.889172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76719 ] 00:20:31.845 [2024-12-05 17:07:06.046319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.845 [2024-12-05 17:07:06.124498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.104 [2024-12-05 17:07:06.336046] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.104 [2024-12-05 17:07:06.336098] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:32.365 [2024-12-05 17:07:06.490347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.365 [2024-12-05 17:07:06.490382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:32.365 [2024-12-05 17:07:06.490392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:32.365 [2024-12-05 17:07:06.490399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.365 [2024-12-05 17:07:06.492524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.365 [2024-12-05 17:07:06.492554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:32.365 [2024-12-05 17:07:06.492562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.113 ms 00:20:32.365 [2024-12-05 17:07:06.492568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.365 [2024-12-05 17:07:06.492626] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:32.366 [2024-12-05 17:07:06.493569] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:32.366 [2024-12-05 17:07:06.493607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.493615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:32.366 [2024-12-05 17:07:06.493623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:20:32.366 [2024-12-05 17:07:06.493629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.494660] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:32.366 [2024-12-05 17:07:06.504504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.504531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:32.366 [2024-12-05 17:07:06.504540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.845 ms 00:20:32.366 [2024-12-05 17:07:06.504546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.504616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.504625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:32.366 [2024-12-05 17:07:06.504632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:20:32.366 [2024-12-05 17:07:06.504638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.509187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.509211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:32.366 [2024-12-05 17:07:06.509219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.521 ms 00:20:32.366 [2024-12-05 17:07:06.509225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.509300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.509307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:32.366 [2024-12-05 17:07:06.509314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:32.366 [2024-12-05 17:07:06.509320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.509338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.509344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:32.366 [2024-12-05 17:07:06.509350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:32.366 [2024-12-05 17:07:06.509356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.509375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:32.366 [2024-12-05 17:07:06.512101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.512123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:32.366 [2024-12-05 17:07:06.512131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.731 ms 00:20:32.366 [2024-12-05 17:07:06.512137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.512166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.512173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:32.366 [2024-12-05 17:07:06.512179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.366 [2024-12-05 17:07:06.512185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.512199] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:32.366 [2024-12-05 17:07:06.512214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:32.366 [2024-12-05 17:07:06.512240] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:32.366 [2024-12-05 17:07:06.512252] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:32.366 [2024-12-05 17:07:06.512330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:32.366 [2024-12-05 17:07:06.512338] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:32.366 [2024-12-05 17:07:06.512346] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:32.366 [2024-12-05 17:07:06.512355] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512362] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512368] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:32.366 [2024-12-05 17:07:06.512374] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:32.366 [2024-12-05 17:07:06.512380] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:32.366 [2024-12-05 17:07:06.512386] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:32.366 [2024-12-05 17:07:06.512391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.512397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:32.366 [2024-12-05 17:07:06.512403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:20:32.366 [2024-12-05 17:07:06.512408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.512476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.366 [2024-12-05 17:07:06.512484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:32.366 [2024-12-05 17:07:06.512490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:32.366 [2024-12-05 17:07:06.512495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.366 [2024-12-05 17:07:06.512569] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:32.366 [2024-12-05 17:07:06.512578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:32.366 [2024-12-05 17:07:06.512584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512590] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:32.366 [2024-12-05 17:07:06.512602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:32.366 [2024-12-05 17:07:06.512619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.366 [2024-12-05 17:07:06.512630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:32.366 [2024-12-05 17:07:06.512639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:32.366 [2024-12-05 17:07:06.512645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:32.366 [2024-12-05 17:07:06.512650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:32.366 [2024-12-05 17:07:06.512655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:32.366 [2024-12-05 17:07:06.512661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512666] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:32.366 [2024-12-05 17:07:06.512671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:32.366 [2024-12-05 17:07:06.512705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:32.366 [2024-12-05 17:07:06.512721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:32.366 [2024-12-05 17:07:06.512737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:32.366 [2024-12-05 17:07:06.512752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:32.366 [2024-12-05 17:07:06.512762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:32.366 [2024-12-05 17:07:06.512768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.366 [2024-12-05 17:07:06.512778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:32.366 [2024-12-05 17:07:06.512783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:32.366 [2024-12-05 17:07:06.512789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:32.366 [2024-12-05 17:07:06.512794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:32.366 [2024-12-05 17:07:06.512798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:32.366 [2024-12-05 17:07:06.512803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:32.366 [2024-12-05 17:07:06.512813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:32.366 [2024-12-05 17:07:06.512821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.366 [2024-12-05 17:07:06.512827] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:32.367 [2024-12-05 17:07:06.512833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:32.367 [2024-12-05 17:07:06.512841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:32.367 [2024-12-05 17:07:06.512846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:32.367 [2024-12-05 17:07:06.512852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:32.367 
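A note on reading these layout dumps: dump_region prints each region's offset and size in MiB, while the SB metadata layout dump a few lines below repeats the same regions as hex block offsets and counts. Assuming the usual 4 KiB FTL block size (0x5a00 blocks * 4 KiB = 90.00 MiB, which matches the l2p region above), a sketch like the following converts the hex sizes for cross-checking; ftl.log is a placeholder name for a saved copy of this console output:

  # minimal sketch, assuming GNU awk, 4 KiB FTL blocks, and this log saved as ftl.log
  grep -oE 'blk_offs:0x[0-9a-f]+ blk_sz:0x[0-9a-f]+' ftl.log |
    awk '{ sz = strtonum(substr($2, 8))                # hex block count after "blk_sz:"
           printf "%s -> %.2f MiB\n", $0, sz * 4096 / 1048576 }'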
[2024-12-05 17:07:06.512857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:32.367 [2024-12-05 17:07:06.512862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:32.367 [2024-12-05 17:07:06.512867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:32.367 [2024-12-05 17:07:06.512872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:32.367 [2024-12-05 17:07:06.512877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:32.367 [2024-12-05 17:07:06.512884] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:32.367 [2024-12-05 17:07:06.512891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.512897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:32.367 [2024-12-05 17:07:06.512903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:32.367 [2024-12-05 17:07:06.512908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:32.367 [2024-12-05 17:07:06.512913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:32.367 [2024-12-05 17:07:06.512918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:32.367 [2024-12-05 17:07:06.512924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:32.367 [2024-12-05 17:07:06.512930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:32.367 [2024-12-05 17:07:06.512935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:32.367 [2024-12-05 17:07:06.512940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:32.367 [2024-12-05 17:07:06.512945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.512965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.512970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.512976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.512982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:32.367 [2024-12-05 17:07:06.512987] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:32.367 [2024-12-05 17:07:06.512994] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.513002] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:32.367 [2024-12-05 17:07:06.513007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:32.367 [2024-12-05 17:07:06.513013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:32.367 [2024-12-05 17:07:06.513020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:32.367 [2024-12-05 17:07:06.513027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.513035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:32.367 [2024-12-05 17:07:06.513041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:20:32.367 [2024-12-05 17:07:06.513047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.533929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.533964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:32.367 [2024-12-05 17:07:06.533972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.840 ms 00:20:32.367 [2024-12-05 17:07:06.533978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.534072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.534079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:32.367 [2024-12-05 17:07:06.534086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:32.367 [2024-12-05 17:07:06.534092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.577995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.578117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:32.367 [2024-12-05 17:07:06.578135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.886 ms 00:20:32.367 [2024-12-05 17:07:06.578141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.578202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.578211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:32.367 [2024-12-05 17:07:06.578217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:32.367 [2024-12-05 17:07:06.578223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.578500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.578512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:32.367 [2024-12-05 17:07:06.578520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:20:32.367 [2024-12-05 17:07:06.578531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 
17:07:06.578635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.578643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:32.367 [2024-12-05 17:07:06.578649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:32.367 [2024-12-05 17:07:06.578656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.589562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.589673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:32.367 [2024-12-05 17:07:06.589685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.890 ms 00:20:32.367 [2024-12-05 17:07:06.589692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.599945] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:32.367 [2024-12-05 17:07:06.599977] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:32.367 [2024-12-05 17:07:06.599998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.600005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:32.367 [2024-12-05 17:07:06.600011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.215 ms 00:20:32.367 [2024-12-05 17:07:06.600017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.618550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.618578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:32.367 [2024-12-05 17:07:06.618587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.475 ms 00:20:32.367 [2024-12-05 17:07:06.618594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.627622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.627647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:32.367 [2024-12-05 17:07:06.627655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.972 ms 00:20:32.367 [2024-12-05 17:07:06.627660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.636368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.636393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:32.367 [2024-12-05 17:07:06.636401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.666 ms 00:20:32.367 [2024-12-05 17:07:06.636407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.636876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.636887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:32.367 [2024-12-05 17:07:06.636894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:20:32.367 [2024-12-05 17:07:06.636899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.682603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.682748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:32.367 [2024-12-05 17:07:06.682763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.685 ms 00:20:32.367 [2024-12-05 17:07:06.682769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.690461] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:32.367 [2024-12-05 17:07:06.702070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.702098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:32.367 [2024-12-05 17:07:06.702109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.244 ms 00:20:32.367 [2024-12-05 17:07:06.702118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.367 [2024-12-05 17:07:06.702186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.367 [2024-12-05 17:07:06.702195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:32.368 [2024-12-05 17:07:06.702202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:32.368 [2024-12-05 17:07:06.702208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.702245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.368 [2024-12-05 17:07:06.702252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:32.368 [2024-12-05 17:07:06.702259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:32.368 [2024-12-05 17:07:06.702266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.702290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.368 [2024-12-05 17:07:06.702298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:32.368 [2024-12-05 17:07:06.702304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:32.368 [2024-12-05 17:07:06.702310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.702333] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:32.368 [2024-12-05 17:07:06.702340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.368 [2024-12-05 17:07:06.702346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:32.368 [2024-12-05 17:07:06.702352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:32.368 [2024-12-05 17:07:06.702358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.720216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.368 [2024-12-05 17:07:06.720244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:32.368 [2024-12-05 17:07:06.720253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.844 ms 00:20:32.368 [2024-12-05 17:07:06.720259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.720327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:32.368 [2024-12-05 17:07:06.720335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:20:32.368 [2024-12-05 17:07:06.720342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:32.368 [2024-12-05 17:07:06.720348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:32.368 [2024-12-05 17:07:06.721001] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:32.368 [2024-12-05 17:07:06.723303] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 230.423 ms, result 0 00:20:32.368 [2024-12-05 17:07:06.724492] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:32.629 [2024-12-05 17:07:06.735170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:33.571  [2024-12-05T17:07:08.881Z] Copying: 25/256 [MB] (25 MBps) [2024-12-05T17:07:09.826Z] Copying: 35/256 [MB] (10 MBps) [2024-12-05T17:07:10.769Z] Copying: 46/256 [MB] (10 MBps) [2024-12-05T17:07:12.158Z] Copying: 60/256 [MB] (14 MBps) [2024-12-05T17:07:13.102Z] Copying: 77/256 [MB] (16 MBps) [2024-12-05T17:07:14.046Z] Copying: 92/256 [MB] (15 MBps) [2024-12-05T17:07:14.991Z] Copying: 106/256 [MB] (13 MBps) [2024-12-05T17:07:15.932Z] Copying: 121/256 [MB] (14 MBps) [2024-12-05T17:07:16.875Z] Copying: 132/256 [MB] (11 MBps) [2024-12-05T17:07:17.820Z] Copying: 150/256 [MB] (17 MBps) [2024-12-05T17:07:18.821Z] Copying: 166/256 [MB] (16 MBps) [2024-12-05T17:07:19.778Z] Copying: 186/256 [MB] (19 MBps) [2024-12-05T17:07:21.168Z] Copying: 197/256 [MB] (11 MBps) [2024-12-05T17:07:22.108Z] Copying: 210/256 [MB] (12 MBps) [2024-12-05T17:07:23.048Z] Copying: 221/256 [MB] (10 MBps) [2024-12-05T17:07:23.617Z] Copying: 238/256 [MB] (17 MBps) [2024-12-05T17:07:23.617Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-05 17:07:23.600296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:49.250 [2024-12-05 17:07:23.610657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.250 [2024-12-05 17:07:23.610708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:49.250 [2024-12-05 17:07:23.610734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:49.250 [2024-12-05 17:07:23.610743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.250 [2024-12-05 17:07:23.610769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:49.250 [2024-12-05 17:07:23.613858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.250 [2024-12-05 17:07:23.613900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:49.250 [2024-12-05 17:07:23.613913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.072 ms 00:20:49.250 [2024-12-05 17:07:23.613922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.250 [2024-12-05 17:07:23.614202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.250 [2024-12-05 17:07:23.614216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:49.250 [2024-12-05 17:07:23.614226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:20:49.250 [2024-12-05 17:07:23.614234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 
17:07:23.617959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.617986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:49.511 [2024-12-05 17:07:23.617996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.706 ms 00:20:49.511 [2024-12-05 17:07:23.618004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.624974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.625177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:49.511 [2024-12-05 17:07:23.625198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.953 ms 00:20:49.511 [2024-12-05 17:07:23.625207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.651156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.651204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:49.511 [2024-12-05 17:07:23.651217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.878 ms 00:20:49.511 [2024-12-05 17:07:23.651224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.668454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.668659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:49.511 [2024-12-05 17:07:23.668700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.179 ms 00:20:49.511 [2024-12-05 17:07:23.668709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.669017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.669033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:49.511 [2024-12-05 17:07:23.669053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:20:49.511 [2024-12-05 17:07:23.669061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.695140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.695326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:49.511 [2024-12-05 17:07:23.695346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.061 ms 00:20:49.511 [2024-12-05 17:07:23.695355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.720470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.720514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:49.511 [2024-12-05 17:07:23.720527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.059 ms 00:20:49.511 [2024-12-05 17:07:23.720534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.745567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.745611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:49.511 [2024-12-05 17:07:23.745623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.982 ms 00:20:49.511 [2024-12-05 17:07:23.745631] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.511 [2024-12-05 17:07:23.770597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.511 [2024-12-05 17:07:23.770644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:49.512 [2024-12-05 17:07:23.770657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.882 ms 00:20:49.512 [2024-12-05 17:07:23.770664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.512 [2024-12-05 17:07:23.770727] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:49.512 [2024-12-05 17:07:23.770743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770905] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.770999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771138] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 
17:07:23.771343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:49.512 [2024-12-05 17:07:23.771470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
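Each ftl_dev_dump_bands line in this run reports a band's valid-block count out of 261120, its write count, and its state; after this read-only pass every band is still free with wr_cnt 0, so the hundred lines here and just below all carry the same fact. A one-liner can condense dumps like this; ftl.log again stands in for a saved copy of the log:

  # minimal sketch; tallies bands by state (every band in the dump above is free)
  grep -oE 'Band [0-9]+: [0-9]+ / [0-9]+ wr_cnt: [0-9]+ state: [a-z]+' ftl.log |
    awk '{ states[$NF]++ } END { for (s in states) print s ": " states[s] }'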
00:20:49.513 [2024-12-05 17:07:23.771542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:49.513 [2024-12-05 17:07:23.771589] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:49.513 [2024-12-05 17:07:23.771599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:49.513 [2024-12-05 17:07:23.771608] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:49.513 [2024-12-05 17:07:23.771615] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:49.513 [2024-12-05 17:07:23.771623] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:49.513 [2024-12-05 17:07:23.771632] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:49.513 [2024-12-05 17:07:23.771639] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:49.513 [2024-12-05 17:07:23.771647] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:49.513 [2024-12-05 17:07:23.771660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:49.513 [2024-12-05 17:07:23.771667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:49.513 [2024-12-05 17:07:23.771673] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:49.513 [2024-12-05 17:07:23.771681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.513 [2024-12-05 17:07:23.771689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:49.513 [2024-12-05 17:07:23.771697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:20:49.513 [2024-12-05 17:07:23.771705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.785554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.513 [2024-12-05 17:07:23.785736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:49.513 [2024-12-05 17:07:23.785753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.829 ms 00:20:49.513 [2024-12-05 17:07:23.785762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.786210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:49.513 [2024-12-05 17:07:23.786225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:49.513 [2024-12-05 17:07:23.786235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:20:49.513 [2024-12-05 17:07:23.786243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.825089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.513 [2024-12-05 17:07:23.825136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:49.513 [2024-12-05 17:07:23.825148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.513 [2024-12-05 17:07:23.825164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.825264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.513 [2024-12-05 17:07:23.825274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:49.513 [2024-12-05 17:07:23.825283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.513 [2024-12-05 17:07:23.825292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.825342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.513 [2024-12-05 17:07:23.825354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:49.513 [2024-12-05 17:07:23.825362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.513 [2024-12-05 17:07:23.825370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.513 [2024-12-05 17:07:23.825390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.513 [2024-12-05 17:07:23.825399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:49.513 [2024-12-05 17:07:23.825407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.513 [2024-12-05 17:07:23.825415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.910623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.910678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:49.774 [2024-12-05 17:07:23.910692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.910700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.981716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.981764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:49.774 [2024-12-05 17:07:23.981777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.981787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.981863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.981875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:49.774 [2024-12-05 17:07:23.981884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.981893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.981925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.981941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:49.774 [2024-12-05 17:07:23.981974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.981984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.982091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.982104] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:49.774 [2024-12-05 17:07:23.982112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.982121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.982161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.982172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:49.774 [2024-12-05 17:07:23.982185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.982193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.982239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.982251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:49.774 [2024-12-05 17:07:23.982260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.982268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.982318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:49.774 [2024-12-05 17:07:23.982332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:49.774 [2024-12-05 17:07:23.982341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:49.774 [2024-12-05 17:07:23.982350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:49.774 [2024-12-05 17:07:23.982506] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.850 ms, result 0 00:20:50.716 00:20:50.716 00:20:50.716 17:07:24 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:50.716 17:07:24 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:50.975 17:07:25 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:51.235 [2024-12-05 17:07:25.381722] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
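Taking stock of the traced test steps: trim.sh@85 read 65536 blocks from ftl0 into test/ftl/data (which lines up with the 256/256 [MB] copy progress above, assuming 4 KiB logical blocks), @86 compared the first 4194304 bytes of the file against /dev/zero, presumably checking that a previously trimmed range reads back as zeroes, @87 took an md5sum of the full file, and @90 now writes 1024 blocks of random_pattern back through ftl0. The verification pair can be rerun standalone, restating the traced commands with the paths from the log:

  # minimal sketch restating the verification commands traced from trim.sh above
  data=/home/vagrant/spdk_repo/spdk/test/ftl/data
  cmp --bytes=4194304 "$data" /dev/zero && echo 'first 4 MiB read back as zeroes'
  md5sum "$data"    # checksum of the whole snapshot, kept for later comparison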
00:20:51.235 [2024-12-05 17:07:25.381868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76923 ] 00:20:51.235 [2024-12-05 17:07:25.546394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.495 [2024-12-05 17:07:25.656909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.755 [2024-12-05 17:07:25.950536] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.755 [2024-12-05 17:07:25.950625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:51.755 [2024-12-05 17:07:26.113072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.755 [2024-12-05 17:07:26.113130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:51.755 [2024-12-05 17:07:26.113145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:51.755 [2024-12-05 17:07:26.113155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.755 [2024-12-05 17:07:26.116241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.755 [2024-12-05 17:07:26.116481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:51.755 [2024-12-05 17:07:26.116504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.064 ms 00:20:51.755 [2024-12-05 17:07:26.116513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.755 [2024-12-05 17:07:26.116732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:51.755 [2024-12-05 17:07:26.117560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:51.755 [2024-12-05 17:07:26.117597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.755 [2024-12-05 17:07:26.117607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:51.755 [2024-12-05 17:07:26.117617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.876 ms 00:20:51.755 [2024-12-05 17:07:26.117625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.755 [2024-12-05 17:07:26.119387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:52.016 [2024-12-05 17:07:26.133810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.016 [2024-12-05 17:07:26.133874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:52.016 [2024-12-05 17:07:26.133888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.425 ms 00:20:52.016 [2024-12-05 17:07:26.133897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.134039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.134053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:52.017 [2024-12-05 17:07:26.134064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:52.017 [2024-12-05 17:07:26.134073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.142175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:52.017 [2024-12-05 17:07:26.142216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.017 [2024-12-05 17:07:26.142227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.034 ms 00:20:52.017 [2024-12-05 17:07:26.142235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.142341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.142352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.017 [2024-12-05 17:07:26.142362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:20:52.017 [2024-12-05 17:07:26.142371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.142400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.142411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:52.017 [2024-12-05 17:07:26.142419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:52.017 [2024-12-05 17:07:26.142427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.142448] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:52.017 [2024-12-05 17:07:26.146482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.146521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.017 [2024-12-05 17:07:26.146532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.038 ms 00:20:52.017 [2024-12-05 17:07:26.146541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.146616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.146627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:52.017 [2024-12-05 17:07:26.146637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:52.017 [2024-12-05 17:07:26.146645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.146670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:52.017 [2024-12-05 17:07:26.146693] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:52.017 [2024-12-05 17:07:26.146731] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:52.017 [2024-12-05 17:07:26.146748] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:52.017 [2024-12-05 17:07:26.146854] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:52.017 [2024-12-05 17:07:26.146867] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:52.017 [2024-12-05 17:07:26.146879] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:52.017 [2024-12-05 17:07:26.146894] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:52.017 [2024-12-05 17:07:26.146903] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:52.017 [2024-12-05 17:07:26.146912] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:52.017 [2024-12-05 17:07:26.146920] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:52.017 [2024-12-05 17:07:26.146928] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:52.017 [2024-12-05 17:07:26.146937] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:52.017 [2024-12-05 17:07:26.146977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.146987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:52.017 [2024-12-05 17:07:26.146995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:20:52.017 [2024-12-05 17:07:26.147004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.147093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.017 [2024-12-05 17:07:26.147107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:52.017 [2024-12-05 17:07:26.147116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:52.017 [2024-12-05 17:07:26.147124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.017 [2024-12-05 17:07:26.147229] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:52.017 [2024-12-05 17:07:26.147242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:52.017 [2024-12-05 17:07:26.147252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:52.017 [2024-12-05 17:07:26.147276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:52.017 [2024-12-05 17:07:26.147302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.017 [2024-12-05 17:07:26.147317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:52.017 [2024-12-05 17:07:26.147333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:52.017 [2024-12-05 17:07:26.147341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:52.017 [2024-12-05 17:07:26.147348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:52.017 [2024-12-05 17:07:26.147355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:52.017 [2024-12-05 17:07:26.147363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:52.017 [2024-12-05 17:07:26.147379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147387] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:52.017 [2024-12-05 17:07:26.147403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147410] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:52.017 [2024-12-05 17:07:26.147425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:52.017 [2024-12-05 17:07:26.147447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:52.017 [2024-12-05 17:07:26.147468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:52.017 [2024-12-05 17:07:26.147491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.017 [2024-12-05 17:07:26.147505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:52.017 [2024-12-05 17:07:26.147512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:52.017 [2024-12-05 17:07:26.147518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:52.017 [2024-12-05 17:07:26.147525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:52.017 [2024-12-05 17:07:26.147535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:52.017 [2024-12-05 17:07:26.147543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:52.017 [2024-12-05 17:07:26.147557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:52.017 [2024-12-05 17:07:26.147564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147570] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:52.017 [2024-12-05 17:07:26.147579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:52.017 [2024-12-05 17:07:26.147590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:52.017 [2024-12-05 17:07:26.147604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:52.017 [2024-12-05 17:07:26.147617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:52.017 [2024-12-05 17:07:26.147625] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:52.017 
[2024-12-05 17:07:26.147632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:52.017 [2024-12-05 17:07:26.147639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:52.017 [2024-12-05 17:07:26.147646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:52.017 [2024-12-05 17:07:26.147655] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:52.017 [2024-12-05 17:07:26.147664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.017 [2024-12-05 17:07:26.147674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:52.017 [2024-12-05 17:07:26.147682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:52.018 [2024-12-05 17:07:26.147690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:52.018 [2024-12-05 17:07:26.147697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:52.018 [2024-12-05 17:07:26.147704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:52.018 [2024-12-05 17:07:26.147712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:52.018 [2024-12-05 17:07:26.147719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:52.018 [2024-12-05 17:07:26.147727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:52.018 [2024-12-05 17:07:26.147736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:52.018 [2024-12-05 17:07:26.147743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:52.018 [2024-12-05 17:07:26.147781] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:52.018 [2024-12-05 17:07:26.147790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147798] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:52.018 [2024-12-05 17:07:26.147805] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:52.018 [2024-12-05 17:07:26.147812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:52.018 [2024-12-05 17:07:26.147820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:52.018 [2024-12-05 17:07:26.147828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.147840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:52.018 [2024-12-05 17:07:26.147850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:20:52.018 [2024-12-05 17:07:26.147857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.179758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.179810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.018 [2024-12-05 17:07:26.179824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:20:52.018 [2024-12-05 17:07:26.179832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.180002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.180017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:52.018 [2024-12-05 17:07:26.180027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:52.018 [2024-12-05 17:07:26.180035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.221316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.221535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.018 [2024-12-05 17:07:26.221564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.256 ms 00:20:52.018 [2024-12-05 17:07:26.221574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.221690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.221703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.018 [2024-12-05 17:07:26.221712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.018 [2024-12-05 17:07:26.221722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.222287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.222312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.018 [2024-12-05 17:07:26.222331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:20:52.018 [2024-12-05 17:07:26.222339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.222499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.222511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.018 [2024-12-05 17:07:26.222520] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:20:52.018 [2024-12-05 17:07:26.222529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.239198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.239242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.018 [2024-12-05 17:07:26.239253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.644 ms 00:20:52.018 [2024-12-05 17:07:26.239262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.253584] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:52.018 [2024-12-05 17:07:26.253635] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:52.018 [2024-12-05 17:07:26.253650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.253658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:52.018 [2024-12-05 17:07:26.253669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.274 ms 00:20:52.018 [2024-12-05 17:07:26.253677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.279607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.279657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:52.018 [2024-12-05 17:07:26.279670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.834 ms 00:20:52.018 [2024-12-05 17:07:26.279679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.292788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.292833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:52.018 [2024-12-05 17:07:26.292847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.013 ms 00:20:52.018 [2024-12-05 17:07:26.292856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.305545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.305589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:52.018 [2024-12-05 17:07:26.305601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.600 ms 00:20:52.018 [2024-12-05 17:07:26.305609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.306303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.306331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:52.018 [2024-12-05 17:07:26.306342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:20:52.018 [2024-12-05 17:07:26.306350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.018 [2024-12-05 17:07:26.373608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.018 [2024-12-05 17:07:26.373664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:52.018 [2024-12-05 17:07:26.373680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.230 ms 00:20:52.018 [2024-12-05 17:07:26.373689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.385239] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:52.277 [2024-12-05 17:07:26.405288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.405338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:52.277 [2024-12-05 17:07:26.405351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:20:52.277 [2024-12-05 17:07:26.405366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.405461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.405472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:52.277 [2024-12-05 17:07:26.405486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:52.277 [2024-12-05 17:07:26.405496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.405553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.405564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:52.277 [2024-12-05 17:07:26.405573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:52.277 [2024-12-05 17:07:26.405585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.405616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.405625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:52.277 [2024-12-05 17:07:26.405634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:52.277 [2024-12-05 17:07:26.405642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.405681] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:52.277 [2024-12-05 17:07:26.405691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.405700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:52.277 [2024-12-05 17:07:26.405708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:52.277 [2024-12-05 17:07:26.405715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.432331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.432383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:52.277 [2024-12-05 17:07:26.432399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.595 ms 00:20:52.277 [2024-12-05 17:07:26.432408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.277 [2024-12-05 17:07:26.432540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.277 [2024-12-05 17:07:26.432554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:52.277 [2024-12-05 17:07:26.432564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:52.277 [2024-12-05 17:07:26.432572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:52.277 [2024-12-05 17:07:26.433695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:52.277 [2024-12-05 17:07:26.437223] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.299 ms, result 0 00:20:52.277 [2024-12-05 17:07:26.438750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.277 [2024-12-05 17:07:26.452396] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:52.537  [2024-12-05T17:07:26.904Z] Copying: 4096/4096 [kB] (average 12 MBps)[2024-12-05 17:07:26.786824] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.537 [2024-12-05 17:07:26.795933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.796002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.537 [2024-12-05 17:07:26.796025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:52.537 [2024-12-05 17:07:26.796034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.796058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:52.537 [2024-12-05 17:07:26.799076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.799116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.537 [2024-12-05 17:07:26.799129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.004 ms 00:20:52.537 [2024-12-05 17:07:26.799138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.802481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.802528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.537 [2024-12-05 17:07:26.802540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:20:52.537 [2024-12-05 17:07:26.802548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.807053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.807091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.537 [2024-12-05 17:07:26.807102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.483 ms 00:20:52.537 [2024-12-05 17:07:26.807110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.814061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.814102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.537 [2024-12-05 17:07:26.814113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:20:52.537 [2024-12-05 17:07:26.814121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.839652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.839699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.537 [2024-12-05 17:07:26.839711] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.478 ms 00:20:52.537 [2024-12-05 17:07:26.839719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.856532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.856584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.537 [2024-12-05 17:07:26.856597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.749 ms 00:20:52.537 [2024-12-05 17:07:26.856605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.856775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.856790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.537 [2024-12-05 17:07:26.856810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:20:52.537 [2024-12-05 17:07:26.856819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.537 [2024-12-05 17:07:26.882867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.537 [2024-12-05 17:07:26.882911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.537 [2024-12-05 17:07:26.882923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.031 ms 00:20:52.537 [2024-12-05 17:07:26.882931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.798 [2024-12-05 17:07:26.908702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.798 [2024-12-05 17:07:26.908889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.798 [2024-12-05 17:07:26.908909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.665 ms 00:20:52.798 [2024-12-05 17:07:26.908916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.798 [2024-12-05 17:07:26.934215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.798 [2024-12-05 17:07:26.934260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.798 [2024-12-05 17:07:26.934272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.173 ms 00:20:52.798 [2024-12-05 17:07:26.934279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.798 [2024-12-05 17:07:26.959391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.798 [2024-12-05 17:07:26.959592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.798 [2024-12-05 17:07:26.959613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.003 ms 00:20:52.798 [2024-12-05 17:07:26.959621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.798 [2024-12-05 17:07:26.959728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.798 [2024-12-05 17:07:26.959745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.798 [2024-12-05 17:07:26.959779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.798 [2024-12-05 17:07:26.959883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.959991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960392] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.799 [2024-12-05 17:07:26.960584] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.799 [2024-12-05 17:07:26.960593] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:52.799 [2024-12-05 17:07:26.960602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.799 [2024-12-05 17:07:26.960610] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:52.799 [2024-12-05 17:07:26.960617] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.799 [2024-12-05 17:07:26.960625] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.799 [2024-12-05 17:07:26.960633] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.799 [2024-12-05 17:07:26.960640] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.800 [2024-12-05 17:07:26.960651] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.800 [2024-12-05 17:07:26.960659] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.800 [2024-12-05 17:07:26.960665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.800 [2024-12-05 17:07:26.960672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.800 [2024-12-05 17:07:26.960679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.800 [2024-12-05 17:07:26.960701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.944 ms 00:20:52.800 [2024-12-05 17:07:26.960709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:26.974152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.800 [2024-12-05 17:07:26.974307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.800 [2024-12-05 17:07:26.974323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.422 ms 00:20:52.800 [2024-12-05 17:07:26.974330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:26.974668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.800 [2024-12-05 17:07:26.974679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.800 [2024-12-05 17:07:26.974688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:20:52.800 [2024-12-05 17:07:26.974693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.005674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.005810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.800 [2024-12-05 17:07:27.005827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.005840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.005902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.005909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.800 [2024-12-05 17:07:27.005915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.005921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.005985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.005994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.800 [2024-12-05 17:07:27.006002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.006008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.006025] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.006032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.800 [2024-12-05 17:07:27.006038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.006043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.068140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.068285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.800 [2024-12-05 17:07:27.068298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.068312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.800 [2024-12-05 17:07:27.116134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.800 [2024-12-05 17:07:27.116192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.800 [2024-12-05 17:07:27.116235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.800 [2024-12-05 17:07:27.116321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.800 [2024-12-05 17:07:27.116367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.800 [2024-12-05 17:07:27.116413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116419] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.800 [2024-12-05 17:07:27.116461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.800 [2024-12-05 17:07:27.116467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.800 [2024-12-05 17:07:27.116473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.800 [2024-12-05 17:07:27.116576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 320.646 ms, result 0 00:20:53.372 00:20:53.372 00:20:53.372 17:07:27 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=76949 00:20:53.372 17:07:27 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 76949 00:20:53.372 17:07:27 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76949 ']' 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:53.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:53.372 17:07:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:53.633 [2024-12-05 17:07:27.749547] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:20:53.633 [2024-12-05 17:07:27.749831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76949 ] 00:20:53.633 [2024-12-05 17:07:27.906769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.633 [2024-12-05 17:07:27.982916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:54.575 17:07:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:54.575 17:07:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:54.575 17:07:28 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:54.575 [2024-12-05 17:07:28.738509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.575 [2024-12-05 17:07:28.738562] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:54.575 [2024-12-05 17:07:28.912449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.912633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:54.575 [2024-12-05 17:07:28.912657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:54.575 [2024-12-05 17:07:28.912666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.915408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.915448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.575 [2024-12-05 17:07:28.915460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.710 ms 00:20:54.575 [2024-12-05 17:07:28.915468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.915567] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:54.575 [2024-12-05 17:07:28.916305] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:54.575 [2024-12-05 17:07:28.916335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.916343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.575 [2024-12-05 17:07:28.916354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:20:54.575 [2024-12-05 17:07:28.916362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.918331] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:54.575 [2024-12-05 17:07:28.931587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.931634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:54.575 [2024-12-05 17:07:28.931647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.260 ms 00:20:54.575 [2024-12-05 17:07:28.931657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.931753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.931766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:54.575 [2024-12-05 17:07:28.931775] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:54.575 [2024-12-05 17:07:28.931785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.937780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.937821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.575 [2024-12-05 17:07:28.937831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.944 ms 00:20:54.575 [2024-12-05 17:07:28.937840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.575 [2024-12-05 17:07:28.937942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.575 [2024-12-05 17:07:28.937972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.576 [2024-12-05 17:07:28.937983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:54.576 [2024-12-05 17:07:28.937993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.576 [2024-12-05 17:07:28.938021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.576 [2024-12-05 17:07:28.938032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:54.576 [2024-12-05 17:07:28.938056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:54.576 [2024-12-05 17:07:28.938065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.576 [2024-12-05 17:07:28.938090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:54.838 [2024-12-05 17:07:28.941729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.838 [2024-12-05 17:07:28.941885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.838 [2024-12-05 17:07:28.941906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.643 ms 00:20:54.838 [2024-12-05 17:07:28.941913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.838 [2024-12-05 17:07:28.942002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.838 [2024-12-05 17:07:28.942013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:54.838 [2024-12-05 17:07:28.942025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:54.838 [2024-12-05 17:07:28.942033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.838 [2024-12-05 17:07:28.942054] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:54.838 [2024-12-05 17:07:28.942074] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:54.838 [2024-12-05 17:07:28.942118] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:54.838 [2024-12-05 17:07:28.942133] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:54.838 [2024-12-05 17:07:28.942241] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:54.838 [2024-12-05 17:07:28.942254] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:54.838 [2024-12-05 17:07:28.942267] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:54.838 [2024-12-05 17:07:28.942278] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:54.838 [2024-12-05 17:07:28.942289] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:54.838 [2024-12-05 17:07:28.942297] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:54.838 [2024-12-05 17:07:28.942307] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:54.838 [2024-12-05 17:07:28.942314] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:54.838 [2024-12-05 17:07:28.942325] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:54.838 [2024-12-05 17:07:28.942332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.839 [2024-12-05 17:07:28.942341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:54.839 [2024-12-05 17:07:28.942349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:20:54.839 [2024-12-05 17:07:28.942360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.839 [2024-12-05 17:07:28.942453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.839 [2024-12-05 17:07:28.942463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:54.839 [2024-12-05 17:07:28.942470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:20:54.839 [2024-12-05 17:07:28.942479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.839 [2024-12-05 17:07:28.942580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:54.839 [2024-12-05 17:07:28.942591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:54.839 [2024-12-05 17:07:28.942598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:54.839 [2024-12-05 17:07:28.942627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:54.839 [2024-12-05 17:07:28.942658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.839 [2024-12-05 17:07:28.942673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:54.839 [2024-12-05 17:07:28.942681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:54.839 [2024-12-05 17:07:28.942688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:54.839 [2024-12-05 17:07:28.942696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:54.839 [2024-12-05 17:07:28.942703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:54.839 [2024-12-05 17:07:28.942711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 
[2024-12-05 17:07:28.942718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:54.839 [2024-12-05 17:07:28.942727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:54.839 [2024-12-05 17:07:28.942753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:54.839 [2024-12-05 17:07:28.942777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:54.839 [2024-12-05 17:07:28.942799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:54.839 [2024-12-05 17:07:28.942823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:54.839 [2024-12-05 17:07:28.942844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.839 [2024-12-05 17:07:28.942859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:54.839 [2024-12-05 17:07:28.942867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:54.839 [2024-12-05 17:07:28.942873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:54.839 [2024-12-05 17:07:28.942882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:54.839 [2024-12-05 17:07:28.942889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:54.839 [2024-12-05 17:07:28.942898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:54.839 [2024-12-05 17:07:28.942913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:54.839 [2024-12-05 17:07:28.942919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942928] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:54.839 [2024-12-05 17:07:28.942937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:54.839 [2024-12-05 17:07:28.942945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:54.839 [2024-12-05 17:07:28.942976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:54.839 [2024-12-05 17:07:28.942986] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:20:54.839 [2024-12-05 17:07:28.942993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:54.839 [2024-12-05 17:07:28.943002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:54.839 [2024-12-05 17:07:28.943012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:54.839 [2024-12-05 17:07:28.943021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:54.839 [2024-12-05 17:07:28.943027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:54.839 [2024-12-05 17:07:28.943038] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:54.839 [2024-12-05 17:07:28.943048] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.839 [2024-12-05 17:07:28.943062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:54.839 [2024-12-05 17:07:28.943070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:54.839 [2024-12-05 17:07:28.943079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:54.839 [2024-12-05 17:07:28.943086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:54.839 [2024-12-05 17:07:28.943095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:54.839 [2024-12-05 17:07:28.943102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:54.839 [2024-12-05 17:07:28.943111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:54.839 [2024-12-05 17:07:28.943118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:54.839 [2024-12-05 17:07:28.943127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:54.839 [2024-12-05 17:07:28.943135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:54.839 [2024-12-05 17:07:28.943143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:54.839 [2024-12-05 17:07:28.943152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:54.839 [2024-12-05 17:07:28.943161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:54.839 [2024-12-05 17:07:28.943169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:54.839 [2024-12-05 17:07:28.943178] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:54.840 [2024-12-05 
17:07:28.943186] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:54.840 [2024-12-05 17:07:28.943197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:54.840 [2024-12-05 17:07:28.943204] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:54.840 [2024-12-05 17:07:28.943213] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:54.840 [2024-12-05 17:07:28.943220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:54.840 [2024-12-05 17:07:28.943230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:28.943237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:54.840 [2024-12-05 17:07:28.943249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:20:54.840 [2024-12-05 17:07:28.943256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:28.971866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:28.971912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.840 [2024-12-05 17:07:28.971928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.549 ms 00:20:54.840 [2024-12-05 17:07:28.971936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:28.972082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:28.972092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:54.840 [2024-12-05 17:07:28.972102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:20:54.840 [2024-12-05 17:07:28.972110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.004823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.004865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.840 [2024-12-05 17:07:29.004878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.686 ms 00:20:54.840 [2024-12-05 17:07:29.004886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.004987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.004998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.840 [2024-12-05 17:07:29.005009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.840 [2024-12-05 17:07:29.005016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.005477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.005508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.840 [2024-12-05 17:07:29.005520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:20:54.840 [2024-12-05 17:07:29.005528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.005672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.005681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.840 [2024-12-05 17:07:29.005691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:20:54.840 [2024-12-05 17:07:29.005699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.022633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.022679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.840 [2024-12-05 17:07:29.022693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.910 ms 00:20:54.840 [2024-12-05 17:07:29.022701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.055801] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:54.840 [2024-12-05 17:07:29.055863] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:54.840 [2024-12-05 17:07:29.055885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.055896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:54.840 [2024-12-05 17:07:29.055910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.069 ms 00:20:54.840 [2024-12-05 17:07:29.055925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.082397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.082450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:54.840 [2024-12-05 17:07:29.082466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.338 ms 00:20:54.840 [2024-12-05 17:07:29.082477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.095785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.095836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:54.840 [2024-12-05 17:07:29.095853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.201 ms 00:20:54.840 [2024-12-05 17:07:29.095860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.109083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.109276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:54.840 [2024-12-05 17:07:29.109304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.125 ms 00:20:54.840 [2024-12-05 17:07:29.109311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.109998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.110031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:54.840 [2024-12-05 17:07:29.110044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:20:54.840 [2024-12-05 17:07:29.110052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 
17:07:29.178588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.840 [2024-12-05 17:07:29.178650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:54.840 [2024-12-05 17:07:29.178668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.503 ms 00:20:54.840 [2024-12-05 17:07:29.178677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.840 [2024-12-05 17:07:29.190202] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:55.101 [2024-12-05 17:07:29.209622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.209686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.101 [2024-12-05 17:07:29.209699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.839 ms 00:20:55.101 [2024-12-05 17:07:29.209709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.209827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.209841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.101 [2024-12-05 17:07:29.209850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:20:55.101 [2024-12-05 17:07:29.209861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.209917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.209928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.101 [2024-12-05 17:07:29.209939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:55.101 [2024-12-05 17:07:29.209986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.210012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.210025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.101 [2024-12-05 17:07:29.210034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.101 [2024-12-05 17:07:29.210045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.210080] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.101 [2024-12-05 17:07:29.210099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.210106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.101 [2024-12-05 17:07:29.210116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:55.101 [2024-12-05 17:07:29.210126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.237037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.237089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.101 [2024-12-05 17:07:29.237106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.880 ms 00:20:55.101 [2024-12-05 17:07:29.237115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.237248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.101 [2024-12-05 17:07:29.237259] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.101 [2024-12-05 17:07:29.237274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:20:55.101 [2024-12-05 17:07:29.237282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.101 [2024-12-05 17:07:29.238341] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.101 [2024-12-05 17:07:29.241870] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 325.538 ms, result 0 00:20:55.101 [2024-12-05 17:07:29.244136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:55.101 Some configs were skipped because the RPC state that can call them passed over. 00:20:55.101 17:07:29 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:55.363 [2024-12-05 17:07:29.489323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.363 [2024-12-05 17:07:29.489542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.363 [2024-12-05 17:07:29.489613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.271 ms 00:20:55.363 [2024-12-05 17:07:29.489642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.363 [2024-12-05 17:07:29.489705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.653 ms, result 0 00:20:55.363 true 00:20:55.363 17:07:29 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:55.363 [2024-12-05 17:07:29.652651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.363 [2024-12-05 17:07:29.652781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:55.363 [2024-12-05 17:07:29.652833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.488 ms 00:20:55.363 [2024-12-05 17:07:29.652855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.363 true 00:20:55.363 [2024-12-05 17:07:29.652905] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.742 ms, result 0 00:20:55.363 17:07:29 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 76949 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76949 ']' 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76949 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76949 00:20:55.363 killing process with pid 76949 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76949' 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76949 00:20:55.363 17:07:29 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76949 00:20:56.307 [2024-12-05 17:07:30.319245] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.319294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:56.307 [2024-12-05 17:07:30.319305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:56.307 [2024-12-05 17:07:30.319315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.319333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:56.307 [2024-12-05 17:07:30.321554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.321581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:56.307 [2024-12-05 17:07:30.321592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.207 ms 00:20:56.307 [2024-12-05 17:07:30.321598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.321839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.321847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:56.307 [2024-12-05 17:07:30.321855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.205 ms 00:20:56.307 [2024-12-05 17:07:30.321860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.325135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.325162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:56.307 [2024-12-05 17:07:30.325171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.258 ms 00:20:56.307 [2024-12-05 17:07:30.325176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.330356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.330499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:56.307 [2024-12-05 17:07:30.330517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:20:56.307 [2024-12-05 17:07:30.330523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.337852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.337966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:56.307 [2024-12-05 17:07:30.337982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.267 ms 00:20:56.307 [2024-12-05 17:07:30.337988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.345008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.345111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:56.307 [2024-12-05 17:07:30.345126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.990 ms 00:20:56.307 [2024-12-05 17:07:30.345132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.345240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.345248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:56.307 [2024-12-05 17:07:30.345255] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:56.307 [2024-12-05 17:07:30.345261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.353231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.353256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:56.307 [2024-12-05 17:07:30.353265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.953 ms 00:20:56.307 [2024-12-05 17:07:30.353270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.360647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.360753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:56.307 [2024-12-05 17:07:30.360770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.345 ms 00:20:56.307 [2024-12-05 17:07:30.360776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.367864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.367960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:56.307 [2024-12-05 17:07:30.368012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.058 ms 00:20:56.307 [2024-12-05 17:07:30.368030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.375136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.307 [2024-12-05 17:07:30.375224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:56.307 [2024-12-05 17:07:30.375317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.048 ms 00:20:56.307 [2024-12-05 17:07:30.375335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.307 [2024-12-05 17:07:30.375368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:56.307 [2024-12-05 17:07:30.375389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:56.307 [2024-12-05 17:07:30.375417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:56.307 [2024-12-05 17:07:30.375439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:56.307 [2024-12-05 17:07:30.375525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:56.307 [2024-12-05 17:07:30.375550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:56.307 [2024-12-05 17:07:30.375575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375742] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.375981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 
[2024-12-05 17:07:30.376491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.376968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:20:56.308 [2024-12-05 17:07:30.377316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.377990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:56.308 [2024-12-05 17:07:30.378307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:56.309 [2024-12-05 17:07:30.378502] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:56.309 [2024-12-05 17:07:30.378522] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:20:56.309 [2024-12-05 17:07:30.378543] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:56.309 [2024-12-05 17:07:30.378559] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:56.309 [2024-12-05 17:07:30.378600] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:56.309 [2024-12-05 17:07:30.378620] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:56.309 [2024-12-05 17:07:30.378657] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:56.309 [2024-12-05 17:07:30.378676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:56.309 [2024-12-05 17:07:30.378691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:56.309 [2024-12-05 17:07:30.378725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:56.309 [2024-12-05 17:07:30.378741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:56.309 [2024-12-05 17:07:30.378757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:56.309 [2024-12-05 17:07:30.378772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:56.309 [2024-12-05 17:07:30.378789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.390 ms 00:20:56.309 [2024-12-05 17:07:30.378807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.388653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.309 [2024-12-05 17:07:30.388752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:56.309 [2024-12-05 17:07:30.388794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.782 ms 00:20:56.309 [2024-12-05 17:07:30.388811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.389123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:56.309 [2024-12-05 17:07:30.389184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:56.309 [2024-12-05 17:07:30.389223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:20:56.309 [2024-12-05 17:07:30.389240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.424498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.426588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:56.309 [2024-12-05 17:07:30.426605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.426612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.427556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.427584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:56.309 [2024-12-05 17:07:30.427593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.427599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.427637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.427644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:56.309 [2024-12-05 17:07:30.427654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.427660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.427675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.427680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:56.309 [2024-12-05 17:07:30.427688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.427695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.488178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.488211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:56.309 [2024-12-05 17:07:30.488221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.488227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 
17:07:30.537329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:56.309 [2024-12-05 17:07:30.537374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:56.309 [2024-12-05 17:07:30.537455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:56.309 [2024-12-05 17:07:30.537497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:56.309 [2024-12-05 17:07:30.537587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:56.309 [2024-12-05 17:07:30.537632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:56.309 [2024-12-05 17:07:30.537685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:56.309 [2024-12-05 17:07:30.537732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:56.309 [2024-12-05 17:07:30.537739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:56.309 [2024-12-05 17:07:30.537745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:56.309 [2024-12-05 17:07:30.537851] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 218.590 ms, result 0 00:20:56.880 17:07:31 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:56.881 [2024-12-05 17:07:31.125064] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:20:56.881 [2024-12-05 17:07:31.125184] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76996 ] 00:20:57.141 [2024-12-05 17:07:31.282390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.141 [2024-12-05 17:07:31.366814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.403 [2024-12-05 17:07:31.576863] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.403 [2024-12-05 17:07:31.576923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:57.403 [2024-12-05 17:07:31.728944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.728991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:57.403 [2024-12-05 17:07:31.729001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.403 [2024-12-05 17:07:31.729007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.731076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.731107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:57.403 [2024-12-05 17:07:31.731114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.057 ms 00:20:57.403 [2024-12-05 17:07:31.731120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.731179] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:57.403 [2024-12-05 17:07:31.731689] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:57.403 [2024-12-05 17:07:31.731710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.731717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:57.403 [2024-12-05 17:07:31.731724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:20:57.403 [2024-12-05 17:07:31.731729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.732939] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:57.403 [2024-12-05 17:07:31.742518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.742550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:57.403 [2024-12-05 17:07:31.742559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.581 ms 00:20:57.403 [2024-12-05 17:07:31.742566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.742639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.742649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:57.403 [2024-12-05 17:07:31.742656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:57.403 [2024-12-05 
17:07:31.742661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.747045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.747069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:57.403 [2024-12-05 17:07:31.747077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.355 ms 00:20:57.403 [2024-12-05 17:07:31.747082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.747154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.747162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:57.403 [2024-12-05 17:07:31.747168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:57.403 [2024-12-05 17:07:31.747174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.747192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.747198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:57.403 [2024-12-05 17:07:31.747204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:57.403 [2024-12-05 17:07:31.747210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.747228] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:57.403 [2024-12-05 17:07:31.749882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.749907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:57.403 [2024-12-05 17:07:31.749915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.659 ms 00:20:57.403 [2024-12-05 17:07:31.749920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.749957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.749965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:57.403 [2024-12-05 17:07:31.749971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:57.403 [2024-12-05 17:07:31.749976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.749991] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:57.403 [2024-12-05 17:07:31.750005] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:57.403 [2024-12-05 17:07:31.750032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:57.403 [2024-12-05 17:07:31.750044] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:57.403 [2024-12-05 17:07:31.750122] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:57.403 [2024-12-05 17:07:31.750130] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:57.403 [2024-12-05 17:07:31.750138] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:20:57.403 [2024-12-05 17:07:31.750147] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:57.403 [2024-12-05 17:07:31.750155] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:57.403 [2024-12-05 17:07:31.750161] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:57.403 [2024-12-05 17:07:31.750166] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:57.403 [2024-12-05 17:07:31.750172] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:57.403 [2024-12-05 17:07:31.750177] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:57.403 [2024-12-05 17:07:31.750183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.750188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:57.403 [2024-12-05 17:07:31.750194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:57.403 [2024-12-05 17:07:31.750199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.750266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.403 [2024-12-05 17:07:31.750275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:57.403 [2024-12-05 17:07:31.750281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:57.403 [2024-12-05 17:07:31.750286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.403 [2024-12-05 17:07:31.750360] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:57.403 [2024-12-05 17:07:31.750368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:57.403 [2024-12-05 17:07:31.750375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.403 [2024-12-05 17:07:31.750381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.403 [2024-12-05 17:07:31.750387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:57.403 [2024-12-05 17:07:31.750392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:57.403 [2024-12-05 17:07:31.750397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:57.404 [2024-12-05 17:07:31.750407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.404 [2024-12-05 17:07:31.750418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:57.404 [2024-12-05 17:07:31.750428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:57.404 [2024-12-05 17:07:31.750433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:57.404 [2024-12-05 17:07:31.750438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:57.404 [2024-12-05 17:07:31.750443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:57.404 [2024-12-05 17:07:31.750448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:20:57.404 [2024-12-05 17:07:31.750459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:57.404 [2024-12-05 17:07:31.750475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:57.404 [2024-12-05 17:07:31.750490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:57.404 [2024-12-05 17:07:31.750505] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:57.404 [2024-12-05 17:07:31.750521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750530] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:57.404 [2024-12-05 17:07:31.750535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.404 [2024-12-05 17:07:31.750545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:57.404 [2024-12-05 17:07:31.750549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:57.404 [2024-12-05 17:07:31.750554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:57.404 [2024-12-05 17:07:31.750559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:57.404 [2024-12-05 17:07:31.750564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:57.404 [2024-12-05 17:07:31.750569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:57.404 [2024-12-05 17:07:31.750579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:57.404 [2024-12-05 17:07:31.750583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750588] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:57.404 [2024-12-05 17:07:31.750594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:57.404 [2024-12-05 17:07:31.750601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:57.404 [2024-12-05 17:07:31.750612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:57.404 [2024-12-05 17:07:31.750617] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:57.404 [2024-12-05 17:07:31.750623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:57.404 [2024-12-05 17:07:31.750628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:57.404 [2024-12-05 17:07:31.750633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:57.404 [2024-12-05 17:07:31.750638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:57.404 [2024-12-05 17:07:31.750645] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:57.404 [2024-12-05 17:07:31.750651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:57.404 [2024-12-05 17:07:31.750663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:57.404 [2024-12-05 17:07:31.750669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:57.404 [2024-12-05 17:07:31.750674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:57.404 [2024-12-05 17:07:31.750680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:57.404 [2024-12-05 17:07:31.750685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:57.404 [2024-12-05 17:07:31.750690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:57.404 [2024-12-05 17:07:31.750696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:57.404 [2024-12-05 17:07:31.750701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:57.404 [2024-12-05 17:07:31.750706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:57.404 [2024-12-05 17:07:31.750734] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:57.404 [2024-12-05 17:07:31.750740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750746] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:57.404 [2024-12-05 17:07:31.750751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:57.404 [2024-12-05 17:07:31.750757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:57.404 [2024-12-05 17:07:31.750762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:57.404 [2024-12-05 17:07:31.750768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.404 [2024-12-05 17:07:31.750775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:57.404 [2024-12-05 17:07:31.750780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:20:57.404 [2024-12-05 17:07:31.750786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.771677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.771707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:57.666 [2024-12-05 17:07:31.771714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.851 ms 00:20:57.666 [2024-12-05 17:07:31.771720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.771815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.771822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:57.666 [2024-12-05 17:07:31.771829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:57.666 [2024-12-05 17:07:31.771834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.818058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.818092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:57.666 [2024-12-05 17:07:31.818104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.207 ms 00:20:57.666 [2024-12-05 17:07:31.818110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.818168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.818176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:57.666 [2024-12-05 17:07:31.818183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:57.666 [2024-12-05 17:07:31.818188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.818483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.818504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:57.666 [2024-12-05 17:07:31.818511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:20:57.666 [2024-12-05 17:07:31.818520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.818626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.818639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:57.666 [2024-12-05 17:07:31.818646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:20:57.666 [2024-12-05 17:07:31.818652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.829502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.829529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:57.666 [2024-12-05 17:07:31.829537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.835 ms 00:20:57.666 [2024-12-05 17:07:31.829542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.839203] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:57.666 [2024-12-05 17:07:31.839233] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:57.666 [2024-12-05 17:07:31.839242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.839249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:57.666 [2024-12-05 17:07:31.839255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:20:57.666 [2024-12-05 17:07:31.839261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.666 [2024-12-05 17:07:31.857687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.666 [2024-12-05 17:07:31.857725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:57.667 [2024-12-05 17:07:31.857733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.378 ms 00:20:57.667 [2024-12-05 17:07:31.857739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.866701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.866729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:57.667 [2024-12-05 17:07:31.866737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.909 ms 00:20:57.667 [2024-12-05 17:07:31.866742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.875551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.875579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:57.667 [2024-12-05 17:07:31.875586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.768 ms 00:20:57.667 [2024-12-05 17:07:31.875592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.876054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.876076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:57.667 [2024-12-05 17:07:31.876083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:20:57.667 [2024-12-05 17:07:31.876089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.920389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.920427] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:57.667 [2024-12-05 17:07:31.920437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.282 ms 00:20:57.667 [2024-12-05 17:07:31.920443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.928232] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:57.667 [2024-12-05 17:07:31.939585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.939613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:57.667 [2024-12-05 17:07:31.939622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.081 ms 00:20:57.667 [2024-12-05 17:07:31.939631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.939697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.939705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:57.667 [2024-12-05 17:07:31.939712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:57.667 [2024-12-05 17:07:31.939718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.939754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.939760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:57.667 [2024-12-05 17:07:31.939767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:57.667 [2024-12-05 17:07:31.939774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.939799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.939806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:57.667 [2024-12-05 17:07:31.939812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:57.667 [2024-12-05 17:07:31.939818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.939842] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:57.667 [2024-12-05 17:07:31.939849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.939855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:57.667 [2024-12-05 17:07:31.939861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:57.667 [2024-12-05 17:07:31.939867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.957880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.957910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:57.667 [2024-12-05 17:07:31.957918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.998 ms 00:20:57.667 [2024-12-05 17:07:31.957925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.958003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:57.667 [2024-12-05 17:07:31.958011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:57.667 [2024-12-05 17:07:31.958018] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:57.667 [2024-12-05 17:07:31.958024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:57.667 [2024-12-05 17:07:31.958634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:57.667 [2024-12-05 17:07:31.961081] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.477 ms, result 0 00:20:57.667 [2024-12-05 17:07:31.962058] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:57.667 [2024-12-05 17:07:31.976815] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.054  [2024-12-05T17:07:34.367Z] Copying: 14/256 [MB] (14 MBps) [2024-12-05T17:07:35.313Z] Copying: 34/256 [MB] (19 MBps) [2024-12-05T17:07:36.254Z] Copying: 48/256 [MB] (13 MBps) [2024-12-05T17:07:37.197Z] Copying: 71/256 [MB] (22 MBps) [2024-12-05T17:07:38.140Z] Copying: 94/256 [MB] (23 MBps) [2024-12-05T17:07:39.084Z] Copying: 117/256 [MB] (22 MBps) [2024-12-05T17:07:40.049Z] Copying: 138/256 [MB] (20 MBps) [2024-12-05T17:07:41.436Z] Copying: 160/256 [MB] (21 MBps) [2024-12-05T17:07:42.380Z] Copying: 181/256 [MB] (20 MBps) [2024-12-05T17:07:43.323Z] Copying: 203/256 [MB] (21 MBps) [2024-12-05T17:07:44.268Z] Copying: 224/256 [MB] (21 MBps) [2024-12-05T17:07:45.215Z] Copying: 235/256 [MB] (11 MBps) [2024-12-05T17:07:45.523Z] Copying: 252/256 [MB] (16 MBps) [2024-12-05T17:07:45.523Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-05 17:07:45.349602] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.156 [2024-12-05 17:07:45.361558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.156 [2024-12-05 17:07:45.361617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.156 [2024-12-05 17:07:45.361642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:11.156 [2024-12-05 17:07:45.361652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.156 [2024-12-05 17:07:45.361678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:11.156 [2024-12-05 17:07:45.364646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.156 [2024-12-05 17:07:45.364715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.156 [2024-12-05 17:07:45.364729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.951 ms 00:21:11.156 [2024-12-05 17:07:45.364738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.156 [2024-12-05 17:07:45.365031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.156 [2024-12-05 17:07:45.365058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.156 [2024-12-05 17:07:45.365068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:21:11.157 [2024-12-05 17:07:45.365077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.368777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.368803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.157 [2024-12-05 17:07:45.368812] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.678 ms 00:21:11.157 [2024-12-05 17:07:45.368820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.375758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.375799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.157 [2024-12-05 17:07:45.375811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.916 ms 00:21:11.157 [2024-12-05 17:07:45.375819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.403004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.403057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.157 [2024-12-05 17:07:45.403070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.115 ms 00:21:11.157 [2024-12-05 17:07:45.403079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.421008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.421059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.157 [2024-12-05 17:07:45.421079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.861 ms 00:21:11.157 [2024-12-05 17:07:45.421087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.421252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.421265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.157 [2024-12-05 17:07:45.421284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:11.157 [2024-12-05 17:07:45.421293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.449165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.449211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:11.157 [2024-12-05 17:07:45.449223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.851 ms 00:21:11.157 [2024-12-05 17:07:45.449231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.475124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.475182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:11.157 [2024-12-05 17:07:45.475193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.823 ms 00:21:11.157 [2024-12-05 17:07:45.475201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.157 [2024-12-05 17:07:45.500349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.157 [2024-12-05 17:07:45.500398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.157 [2024-12-05 17:07:45.500411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.083 ms 00:21:11.157 [2024-12-05 17:07:45.500418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-12-05 17:07:45.525487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.418 [2024-12-05 17:07:45.525533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:21:11.418 [2024-12-05 17:07:45.525546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.985 ms 00:21:11.418 [2024-12-05 17:07:45.525553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.418 [2024-12-05 17:07:45.525603] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.418 [2024-12-05 17:07:45.525620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 
17:07:45.525793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.418 [2024-12-05 17:07:45.525865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.525993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:21:11.419 [2024-12-05 17:07:45.526019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.419 [2024-12-05 17:07:45.526448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.419 [2024-12-05 17:07:45.526457] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c197495b-421a-4479-a175-3609e74ac63a 00:21:11.420 [2024-12-05 17:07:45.526466] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.420 [2024-12-05 17:07:45.526474] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.420 [2024-12-05 17:07:45.526482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.420 [2024-12-05 17:07:45.526490] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.420 [2024-12-05 17:07:45.526497] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.420 [2024-12-05 17:07:45.526505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.420 [2024-12-05 17:07:45.526516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.420 [2024-12-05 17:07:45.526522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.420 [2024-12-05 17:07:45.526529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.420 [2024-12-05 17:07:45.526536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.420 [2024-12-05 17:07:45.526544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.420 [2024-12-05 17:07:45.526553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:21:11.420 [2024-12-05 17:07:45.526560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.540204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.420 [2024-12-05 17:07:45.540249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.420 [2024-12-05 17:07:45.540261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.610 ms 00:21:11.420 [2024-12-05 17:07:45.540269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.540669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.420 [2024-12-05 17:07:45.540705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.420 [2024-12-05 17:07:45.540715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:21:11.420 [2024-12-05 17:07:45.540723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.579849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.579900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.420 [2024-12-05 17:07:45.579911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.579926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 
[2024-12-05 17:07:45.580049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.580060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.420 [2024-12-05 17:07:45.580070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.580077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.580128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.580139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.420 [2024-12-05 17:07:45.580147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.580154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.580175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.580184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.420 [2024-12-05 17:07:45.580191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.580199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.665896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.665962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.420 [2024-12-05 17:07:45.665978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.665987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.736724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.736783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.420 [2024-12-05 17:07:45.736796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.736805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.736890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.736901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.420 [2024-12-05 17:07:45.736910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.736919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.736973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.736991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.420 [2024-12-05 17:07:45.737000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.737008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.420 [2024-12-05 17:07:45.737108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.420 [2024-12-05 17:07:45.737120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.420 [2024-12-05 17:07:45.737129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.420 [2024-12-05 17:07:45.737137] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:11.420 [2024-12-05 17:07:45.737172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:11.420 [2024-12-05 17:07:45.737182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:21:11.420 [2024-12-05 17:07:45.737193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:11.420 [2024-12-05 17:07:45.737201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:11.420 [2024-12-05 17:07:45.737249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:11.420 [2024-12-05 17:07:45.737260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:21:11.420 [2024-12-05 17:07:45.737268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:11.420 [2024-12-05 17:07:45.737276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:11.420 [2024-12-05 17:07:45.737323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:21:11.420 [2024-12-05 17:07:45.737336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:21:11.420 [2024-12-05 17:07:45.737345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:21:11.420 [2024-12-05 17:07:45.737353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:21:11.420 [2024-12-05 17:07:45.737510] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.944 ms, result 0
00:21:12.362
00:21:12.362
00:21:12.362 17:07:46 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:12.935 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
00:21:12.935 17:07:47 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data
00:21:12.935 Process with pid 76949 is not found
17:07:47 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 76949
00:21:12.935 17:07:47 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76949 ']'
00:21:12.935 17:07:47 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76949
00:21:12.935 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76949) - No such process
00:21:12.935 17:07:47 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 76949 is not found'
00:21:12.935
00:21:12.935 real 1m19.153s
00:21:12.935 user 1m45.197s
00:21:12.935 sys 0m5.881s
00:21:12.935 17:07:47 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:21:12.935 17:07:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:21:12.935 ************************************
00:21:12.935 END TEST ftl_trim
00:21:12.935 ************************************
00:21:12.935 17:07:47 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:21:12.935 17:07:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:21:12.935 17:07:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:21:12.935 17:07:47 ftl -- common/autotest_common.sh@10 -- # set +x
00:21:12.935 ************************************
00:21:12.935 START TEST ftl_restore
00:21:12.935 ************************************
00:21:12.935 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0
00:21:12.935 * Looking for test storage...
00:21:13.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-:
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-:
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<'
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 ))
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:21:13.196 17:07:47 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:21:13.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.196 --rc genhtml_branch_coverage=1
00:21:13.196 --rc genhtml_function_coverage=1
00:21:13.196 --rc genhtml_legend=1
00:21:13.196 --rc geninfo_all_blocks=1
00:21:13.196 --rc geninfo_unexecuted_blocks=1
00:21:13.196
00:21:13.196 '
00:21:13.196 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:21:13.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.196 --rc genhtml_branch_coverage=1
00:21:13.197 --rc genhtml_function_coverage=1
00:21:13.197 --rc genhtml_legend=1
00:21:13.197 --rc geninfo_all_blocks=1
00:21:13.197 --rc geninfo_unexecuted_blocks=1
00:21:13.197
00:21:13.197 '
00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:21:13.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.197 --rc genhtml_branch_coverage=1
00:21:13.197 --rc genhtml_function_coverage=1
00:21:13.197 --rc genhtml_legend=1
00:21:13.197 --rc geninfo_all_blocks=1
00:21:13.197 --rc geninfo_unexecuted_blocks=1
00:21:13.197
00:21:13.197 '
00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:21:13.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:21:13.197 --rc genhtml_branch_coverage=1
00:21:13.197 --rc genhtml_function_coverage=1
00:21:13.197 --rc genhtml_legend=1
00:21:13.197 --rc geninfo_all_blocks=1
00:21:13.197 --rc geninfo_unexecuted_blocks=1
00:21:13.197
00:21:13.197 '
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
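The xtrace above walks scripts/common.sh's lt/cmp_versions helpers as they decide that the installed lcov (1.15) predates 2.x, which is why the legacy --rc lcov_branch_coverage=1 option spellings are exported just afterwards. A minimal bash sketch of that comparison, reconstructed from the trace rather than copied from the script (the real helper also validates each field with its decimal checker and supports other operators, which this sketch omits):

    # lt VER1 VER2 -- succeed when VER1 sorts strictly before VER2.
    # Reconstructed from the xtrace above; an illustration, not the
    # exact scripts/common.sh source.
    lt() {
        local -a ver1 ver2
        local v a b
        IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-' and ':'
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal versions are not strictly less
    }

    lt 1.15 2 && echo 'lcov < 2: keep the legacy --rc lcov_* spellings'

With the values in the trace the very first field decides the outcome (1 < 2), matching the return 0 recorded at scripts/common.sh@368.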
00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.xV0xFIs67h 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:21:13.197 
17:07:47 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77229 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77229 00:21:13.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77229 ']' 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.197 17:07:47 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:13.197 17:07:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:21:13.197 [2024-12-05 17:07:47.492795] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:13.197 [2024-12-05 17:07:47.492938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77229 ] 00:21:13.458 [2024-12-05 17:07:47.658074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.458 [2024-12-05 17:07:47.778987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.400 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.400 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:21:14.400 17:07:48 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:14.661 17:07:48 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:14.661 17:07:48 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:21:14.661 17:07:48 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:14.661 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:14.661 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:14.661 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:14.661 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:14.661 17:07:48 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:14.661 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:14.661 { 00:21:14.661 "name": "nvme0n1", 00:21:14.661 "aliases": [ 00:21:14.661 "8e614bb2-313f-44a9-b3cb-9b1e8359c54c" 00:21:14.661 ], 00:21:14.661 "product_name": "NVMe disk", 00:21:14.661 "block_size": 4096, 00:21:14.661 "num_blocks": 1310720, 00:21:14.661 "uuid": 
"8e614bb2-313f-44a9-b3cb-9b1e8359c54c", 00:21:14.661 "numa_id": -1, 00:21:14.661 "assigned_rate_limits": { 00:21:14.661 "rw_ios_per_sec": 0, 00:21:14.661 "rw_mbytes_per_sec": 0, 00:21:14.661 "r_mbytes_per_sec": 0, 00:21:14.661 "w_mbytes_per_sec": 0 00:21:14.661 }, 00:21:14.661 "claimed": true, 00:21:14.661 "claim_type": "read_many_write_one", 00:21:14.661 "zoned": false, 00:21:14.661 "supported_io_types": { 00:21:14.661 "read": true, 00:21:14.661 "write": true, 00:21:14.661 "unmap": true, 00:21:14.661 "flush": true, 00:21:14.661 "reset": true, 00:21:14.661 "nvme_admin": true, 00:21:14.661 "nvme_io": true, 00:21:14.661 "nvme_io_md": false, 00:21:14.661 "write_zeroes": true, 00:21:14.661 "zcopy": false, 00:21:14.661 "get_zone_info": false, 00:21:14.661 "zone_management": false, 00:21:14.661 "zone_append": false, 00:21:14.661 "compare": true, 00:21:14.661 "compare_and_write": false, 00:21:14.661 "abort": true, 00:21:14.661 "seek_hole": false, 00:21:14.661 "seek_data": false, 00:21:14.661 "copy": true, 00:21:14.661 "nvme_iov_md": false 00:21:14.661 }, 00:21:14.661 "driver_specific": { 00:21:14.661 "nvme": [ 00:21:14.661 { 00:21:14.661 "pci_address": "0000:00:11.0", 00:21:14.661 "trid": { 00:21:14.661 "trtype": "PCIe", 00:21:14.661 "traddr": "0000:00:11.0" 00:21:14.661 }, 00:21:14.661 "ctrlr_data": { 00:21:14.661 "cntlid": 0, 00:21:14.661 "vendor_id": "0x1b36", 00:21:14.661 "model_number": "QEMU NVMe Ctrl", 00:21:14.661 "serial_number": "12341", 00:21:14.661 "firmware_revision": "8.0.0", 00:21:14.661 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:14.661 "oacs": { 00:21:14.661 "security": 0, 00:21:14.661 "format": 1, 00:21:14.661 "firmware": 0, 00:21:14.661 "ns_manage": 1 00:21:14.661 }, 00:21:14.661 "multi_ctrlr": false, 00:21:14.661 "ana_reporting": false 00:21:14.661 }, 00:21:14.661 "vs": { 00:21:14.661 "nvme_version": "1.4" 00:21:14.661 }, 00:21:14.661 "ns_data": { 00:21:14.661 "id": 1, 00:21:14.661 "can_share": false 00:21:14.661 } 00:21:14.661 } 00:21:14.661 ], 00:21:14.661 "mp_policy": "active_passive" 00:21:14.661 } 00:21:14.661 } 00:21:14.661 ]' 00:21:14.661 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:14.922 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:14.922 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:14.922 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:14.922 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:14.922 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=80b2ddbb-6f95-4321-963e-14bcda1ad520 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:21:14.922 17:07:49 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 80b2ddbb-6f95-4321-963e-14bcda1ad520 00:21:15.183 17:07:49 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:21:15.446 17:07:49 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=dfc5814f-f9b5-41e9-8a7b-30043cb80add 00:21:15.446 17:07:49 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dfc5814f-f9b5-41e9-8a7b-30043cb80add 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:21:15.707 17:07:49 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.707 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.707 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:15.707 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:15.707 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:15.707 17:07:49 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:15.969 { 00:21:15.969 "name": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:15.969 "aliases": [ 00:21:15.969 "lvs/nvme0n1p0" 00:21:15.969 ], 00:21:15.969 "product_name": "Logical Volume", 00:21:15.969 "block_size": 4096, 00:21:15.969 "num_blocks": 26476544, 00:21:15.969 "uuid": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:15.969 "assigned_rate_limits": { 00:21:15.969 "rw_ios_per_sec": 0, 00:21:15.969 "rw_mbytes_per_sec": 0, 00:21:15.969 "r_mbytes_per_sec": 0, 00:21:15.969 "w_mbytes_per_sec": 0 00:21:15.969 }, 00:21:15.969 "claimed": false, 00:21:15.969 "zoned": false, 00:21:15.969 "supported_io_types": { 00:21:15.969 "read": true, 00:21:15.969 "write": true, 00:21:15.969 "unmap": true, 00:21:15.969 "flush": false, 00:21:15.969 "reset": true, 00:21:15.969 "nvme_admin": false, 00:21:15.969 "nvme_io": false, 00:21:15.969 "nvme_io_md": false, 00:21:15.969 "write_zeroes": true, 00:21:15.969 "zcopy": false, 00:21:15.969 "get_zone_info": false, 00:21:15.969 "zone_management": false, 00:21:15.969 "zone_append": false, 00:21:15.969 "compare": false, 00:21:15.969 "compare_and_write": false, 00:21:15.969 "abort": false, 00:21:15.969 "seek_hole": true, 00:21:15.969 "seek_data": true, 00:21:15.969 "copy": false, 00:21:15.969 "nvme_iov_md": false 00:21:15.969 }, 00:21:15.969 "driver_specific": { 00:21:15.969 "lvol": { 00:21:15.969 "lvol_store_uuid": "dfc5814f-f9b5-41e9-8a7b-30043cb80add", 00:21:15.969 "base_bdev": "nvme0n1", 00:21:15.969 "thin_provision": true, 00:21:15.969 "num_allocated_clusters": 0, 00:21:15.969 "snapshot": false, 00:21:15.969 "clone": false, 00:21:15.969 "esnap_clone": false 00:21:15.969 } 00:21:15.969 } 00:21:15.969 } 00:21:15.969 ]' 00:21:15.969 17:07:50 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:15.969 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:15.969 17:07:50 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:21:15.969 17:07:50 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:21:15.969 17:07:50 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:16.230 17:07:50 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:16.231 17:07:50 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:16.231 17:07:50 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:16.231 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:16.231 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.231 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:16.231 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:16.231 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:16.492 { 00:21:16.492 "name": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:16.492 "aliases": [ 00:21:16.492 "lvs/nvme0n1p0" 00:21:16.492 ], 00:21:16.492 "product_name": "Logical Volume", 00:21:16.492 "block_size": 4096, 00:21:16.492 "num_blocks": 26476544, 00:21:16.492 "uuid": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:16.492 "assigned_rate_limits": { 00:21:16.492 "rw_ios_per_sec": 0, 00:21:16.492 "rw_mbytes_per_sec": 0, 00:21:16.492 "r_mbytes_per_sec": 0, 00:21:16.492 "w_mbytes_per_sec": 0 00:21:16.492 }, 00:21:16.492 "claimed": false, 00:21:16.492 "zoned": false, 00:21:16.492 "supported_io_types": { 00:21:16.492 "read": true, 00:21:16.492 "write": true, 00:21:16.492 "unmap": true, 00:21:16.492 "flush": false, 00:21:16.492 "reset": true, 00:21:16.492 "nvme_admin": false, 00:21:16.492 "nvme_io": false, 00:21:16.492 "nvme_io_md": false, 00:21:16.492 "write_zeroes": true, 00:21:16.492 "zcopy": false, 00:21:16.492 "get_zone_info": false, 00:21:16.492 "zone_management": false, 00:21:16.492 "zone_append": false, 00:21:16.492 "compare": false, 00:21:16.492 "compare_and_write": false, 00:21:16.492 "abort": false, 00:21:16.492 "seek_hole": true, 00:21:16.492 "seek_data": true, 00:21:16.492 "copy": false, 00:21:16.492 "nvme_iov_md": false 00:21:16.492 }, 00:21:16.492 "driver_specific": { 00:21:16.492 "lvol": { 00:21:16.492 "lvol_store_uuid": "dfc5814f-f9b5-41e9-8a7b-30043cb80add", 00:21:16.492 "base_bdev": "nvme0n1", 00:21:16.492 "thin_provision": true, 00:21:16.492 "num_allocated_clusters": 0, 00:21:16.492 "snapshot": false, 00:21:16.492 "clone": false, 00:21:16.492 "esnap_clone": false 00:21:16.492 } 00:21:16.492 } 00:21:16.492 } 00:21:16.492 ]' 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
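The bdev JSON dumps in this stretch all feed common/autotest_common.sh's get_bdev_size, which reduces to two jq extractions over bdev_get_bdevs output plus a bytes-to-MiB conversion. A hedged reconstruction from the traced calls (not the exact helper source; validation and error handling omitted):

    # get_bdev_size BDEV -- print a bdev's capacity in MiB.
    # Reconstructed from the jq calls traced above; an approximation of
    # the real autotest_common.sh helper.
    get_bdev_size() {
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $((bs * nb / 1024 / 1024))   # e.g. 4096 * 26476544 / 1048576 = 103424
    }

The same arithmetic accounts for both sizes in the trace: the raw nvme0n1 namespace (4096 B x 1310720 blocks) produced the echo 5120 earlier, and the thin-provisioned lvol (4096 B x 26476544 blocks) produces the echo 103424 that follows.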
00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:16.492 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:16.492 17:07:50 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:21:16.492 17:07:50 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:16.753 17:07:50 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:21:16.753 17:07:50 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:16.753 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:16.753 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:16.753 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:21:16.753 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:21:16.753 17:07:50 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 50e27835-ac93-44fa-82a8-d6976e292bf2 00:21:17.014 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:17.014 { 00:21:17.014 "name": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:17.014 "aliases": [ 00:21:17.014 "lvs/nvme0n1p0" 00:21:17.014 ], 00:21:17.014 "product_name": "Logical Volume", 00:21:17.014 "block_size": 4096, 00:21:17.014 "num_blocks": 26476544, 00:21:17.014 "uuid": "50e27835-ac93-44fa-82a8-d6976e292bf2", 00:21:17.014 "assigned_rate_limits": { 00:21:17.014 "rw_ios_per_sec": 0, 00:21:17.014 "rw_mbytes_per_sec": 0, 00:21:17.014 "r_mbytes_per_sec": 0, 00:21:17.014 "w_mbytes_per_sec": 0 00:21:17.014 }, 00:21:17.014 "claimed": false, 00:21:17.014 "zoned": false, 00:21:17.014 "supported_io_types": { 00:21:17.014 "read": true, 00:21:17.014 "write": true, 00:21:17.014 "unmap": true, 00:21:17.014 "flush": false, 00:21:17.014 "reset": true, 00:21:17.014 "nvme_admin": false, 00:21:17.014 "nvme_io": false, 00:21:17.014 "nvme_io_md": false, 00:21:17.014 "write_zeroes": true, 00:21:17.014 "zcopy": false, 00:21:17.014 "get_zone_info": false, 00:21:17.014 "zone_management": false, 00:21:17.014 "zone_append": false, 00:21:17.015 "compare": false, 00:21:17.015 "compare_and_write": false, 00:21:17.015 "abort": false, 00:21:17.015 "seek_hole": true, 00:21:17.015 "seek_data": true, 00:21:17.015 "copy": false, 00:21:17.015 "nvme_iov_md": false 00:21:17.015 }, 00:21:17.015 "driver_specific": { 00:21:17.015 "lvol": { 00:21:17.015 "lvol_store_uuid": "dfc5814f-f9b5-41e9-8a7b-30043cb80add", 00:21:17.015 "base_bdev": "nvme0n1", 00:21:17.015 "thin_provision": true, 00:21:17.015 "num_allocated_clusters": 0, 00:21:17.015 "snapshot": false, 00:21:17.015 "clone": false, 00:21:17.015 "esnap_clone": false 00:21:17.015 } 00:21:17.015 } 00:21:17.015 } 00:21:17.015 ]' 00:21:17.015 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:17.015 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:21:17.015 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:17.015 17:07:51 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:21:17.015 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:17.015 17:07:51 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 50e27835-ac93-44fa-82a8-d6976e292bf2 --l2p_dram_limit 10' 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:21:17.015 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:21:17.015 17:07:51 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 50e27835-ac93-44fa-82a8-d6976e292bf2 --l2p_dram_limit 10 -c nvc0n1p0 00:21:17.277 [2024-12-05 17:07:51.468559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.277 [2024-12-05 17:07:51.468600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:17.277 [2024-12-05 17:07:51.468612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:17.277 [2024-12-05 17:07:51.468619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.468663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.468670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:17.278 [2024-12-05 17:07:51.468678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:17.278 [2024-12-05 17:07:51.468698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.468719] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:17.278 [2024-12-05 17:07:51.469316] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:17.278 [2024-12-05 17:07:51.469338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.469345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:17.278 [2024-12-05 17:07:51.469354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:21:17.278 [2024-12-05 17:07:51.469359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.469412] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a74b0820-3859-4012-a4fc-e9af9488b070 00:21:17.278 [2024-12-05 17:07:51.470349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.470380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:17.278 [2024-12-05 17:07:51.470388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:17.278 [2024-12-05 17:07:51.470396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.475493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 
17:07:51.475525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:17.278 [2024-12-05 17:07:51.475532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.063 ms 00:21:17.278 [2024-12-05 17:07:51.475539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.475606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.475615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:17.278 [2024-12-05 17:07:51.475622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:21:17.278 [2024-12-05 17:07:51.475632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.475670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.475678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:17.278 [2024-12-05 17:07:51.475686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:17.278 [2024-12-05 17:07:51.475693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.475710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:17.278 [2024-12-05 17:07:51.478550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.478577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:17.278 [2024-12-05 17:07:51.478587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.843 ms 00:21:17.278 [2024-12-05 17:07:51.478593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.478621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.478627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:17.278 [2024-12-05 17:07:51.478635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:17.278 [2024-12-05 17:07:51.478641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.478654] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:17.278 [2024-12-05 17:07:51.478761] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:17.278 [2024-12-05 17:07:51.478773] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:17.278 [2024-12-05 17:07:51.478785] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:17.278 [2024-12-05 17:07:51.478794] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:17.278 [2024-12-05 17:07:51.478801] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:17.278 [2024-12-05 17:07:51.478809] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:17.278 [2024-12-05 17:07:51.478814] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:17.278 [2024-12-05 17:07:51.478824] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:17.278 [2024-12-05 17:07:51.478830] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:17.278 [2024-12-05 17:07:51.478837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.478847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:17.278 [2024-12-05 17:07:51.478855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:21:17.278 [2024-12-05 17:07:51.478860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.478928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.278 [2024-12-05 17:07:51.478934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:17.278 [2024-12-05 17:07:51.478941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:17.278 [2024-12-05 17:07:51.478947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.278 [2024-12-05 17:07:51.479036] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:17.278 [2024-12-05 17:07:51.479044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:17.278 [2024-12-05 17:07:51.479051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:17.278 [2024-12-05 17:07:51.479070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:17.278 [2024-12-05 17:07:51.479088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.278 [2024-12-05 17:07:51.479100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:17.278 [2024-12-05 17:07:51.479105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:17.278 [2024-12-05 17:07:51.479112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:17.278 [2024-12-05 17:07:51.479116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:17.278 [2024-12-05 17:07:51.479122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:17.278 [2024-12-05 17:07:51.479128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:17.278 [2024-12-05 17:07:51.479141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:17.278 [2024-12-05 17:07:51.479159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479164] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:17.278 
[2024-12-05 17:07:51.479175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:17.278 [2024-12-05 17:07:51.479192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:17.278 [2024-12-05 17:07:51.479209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:17.278 [2024-12-05 17:07:51.479227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.278 [2024-12-05 17:07:51.479239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:17.278 [2024-12-05 17:07:51.479245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:17.278 [2024-12-05 17:07:51.479250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:17.278 [2024-12-05 17:07:51.479255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:17.278 [2024-12-05 17:07:51.479261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:17.278 [2024-12-05 17:07:51.479266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:17.278 [2024-12-05 17:07:51.479277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:17.278 [2024-12-05 17:07:51.479283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479287] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:17.278 [2024-12-05 17:07:51.479295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:17.278 [2024-12-05 17:07:51.479300] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:17.278 [2024-12-05 17:07:51.479307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:17.278 [2024-12-05 17:07:51.479314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:17.278 [2024-12-05 17:07:51.479322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:17.278 [2024-12-05 17:07:51.479327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:17.278 [2024-12-05 17:07:51.479334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:17.279 [2024-12-05 17:07:51.479338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:17.279 [2024-12-05 17:07:51.479344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:17.279 [2024-12-05 17:07:51.479351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:17.279 [2024-12-05 
17:07:51.479361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:17.279 [2024-12-05 17:07:51.479374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:17.279 [2024-12-05 17:07:51.479379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:17.279 [2024-12-05 17:07:51.479386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:17.279 [2024-12-05 17:07:51.479391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:17.279 [2024-12-05 17:07:51.479399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:17.279 [2024-12-05 17:07:51.479405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:17.279 [2024-12-05 17:07:51.479411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:17.279 [2024-12-05 17:07:51.479417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:17.279 [2024-12-05 17:07:51.479425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:17.279 [2024-12-05 17:07:51.479453] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:17.279 [2024-12-05 17:07:51.479461] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:17.279 [2024-12-05 17:07:51.479474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:17.279 [2024-12-05 17:07:51.479479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:17.279 [2024-12-05 17:07:51.479486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:17.279 [2024-12-05 17:07:51.479492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:17.279 [2024-12-05 17:07:51.479498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:17.279 [2024-12-05 17:07:51.479504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:21:17.279 [2024-12-05 17:07:51.479510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:17.279 [2024-12-05 17:07:51.479540] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:21:17.279 [2024-12-05 17:07:51.479550] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:21.480 [2024-12-05 17:07:55.376975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.377065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:21.480 [2024-12-05 17:07:55.377083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3897.395 ms 00:21:21.480 [2024-12-05 17:07:55.377095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.408589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.408656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:21.480 [2024-12-05 17:07:55.408670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.248 ms 00:21:21.480 [2024-12-05 17:07:55.408703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.408844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.408858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:21.480 [2024-12-05 17:07:55.408868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:21:21.480 [2024-12-05 17:07:55.408885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.444126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.444185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:21.480 [2024-12-05 17:07:55.444197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.205 ms 00:21:21.480 [2024-12-05 17:07:55.444209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.444245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.444260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:21.480 [2024-12-05 17:07:55.444269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:21.480 [2024-12-05 17:07:55.444287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.444920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.444975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:21.480 [2024-12-05 17:07:55.444987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:21:21.480 [2024-12-05 17:07:55.444997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 
[2024-12-05 17:07:55.445115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.445129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:21.480 [2024-12-05 17:07:55.445142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:21:21.480 [2024-12-05 17:07:55.445154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.462308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.462360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:21.480 [2024-12-05 17:07:55.462371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.135 ms 00:21:21.480 [2024-12-05 17:07:55.462381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.489568] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:21.480 [2024-12-05 17:07:55.493914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.493972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:21.480 [2024-12-05 17:07:55.493987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.446 ms 00:21:21.480 [2024-12-05 17:07:55.493995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.606065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.606124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:21.480 [2024-12-05 17:07:55.606141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.023 ms 00:21:21.480 [2024-12-05 17:07:55.606151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.606356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.606372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:21.480 [2024-12-05 17:07:55.606387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:21:21.480 [2024-12-05 17:07:55.606395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.632470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.632524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:21.480 [2024-12-05 17:07:55.632541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.019 ms 00:21:21.480 [2024-12-05 17:07:55.632549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.656898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.656956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:21.480 [2024-12-05 17:07:55.656972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.293 ms 00:21:21.480 [2024-12-05 17:07:55.656980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.657600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.657625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:21.480 
[2024-12-05 17:07:55.657638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:21:21.480 [2024-12-05 17:07:55.657649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.748575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.748629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:21.480 [2024-12-05 17:07:55.748648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.882 ms 00:21:21.480 [2024-12-05 17:07:55.748657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.775493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.775540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:21.480 [2024-12-05 17:07:55.775556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.720 ms 00:21:21.480 [2024-12-05 17:07:55.775564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.800846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.800899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:21.480 [2024-12-05 17:07:55.800913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.232 ms 00:21:21.480 [2024-12-05 17:07:55.800921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.826672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.826721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:21.480 [2024-12-05 17:07:55.826735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.692 ms 00:21:21.480 [2024-12-05 17:07:55.826742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.826796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.826806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:21.480 [2024-12-05 17:07:55.826821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:21.480 [2024-12-05 17:07:55.826829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.826920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:21.480 [2024-12-05 17:07:55.826934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:21.480 [2024-12-05 17:07:55.826945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:21.480 [2024-12-05 17:07:55.826968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:21.480 [2024-12-05 17:07:55.828099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4359.000 ms, result 0 00:21:21.480 { 00:21:21.480 "name": "ftl0", 00:21:21.480 "uuid": "a74b0820-3859-4012-a4fc-e9af9488b070" 00:21:21.480 } 00:21:21.741 17:07:55 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:21:21.741 17:07:55 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:21.741 17:07:56 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:21:21.741 17:07:56 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:22.002 [2024-12-05 17:07:56.271502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.271576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.002 [2024-12-05 17:07:56.271592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.002 [2024-12-05 17:07:56.271603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.271629] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.002 [2024-12-05 17:07:56.274675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.274723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.002 [2024-12-05 17:07:56.274737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:21:22.002 [2024-12-05 17:07:56.274745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.275031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.275047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.002 [2024-12-05 17:07:56.275059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:21:22.002 [2024-12-05 17:07:56.275067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.278311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.278335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.002 [2024-12-05 17:07:56.278348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:21:22.002 [2024-12-05 17:07:56.278355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.284565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.284607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.002 [2024-12-05 17:07:56.284625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:21:22.002 [2024-12-05 17:07:56.284633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.311123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.311174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.002 [2024-12-05 17:07:56.311189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.382 ms 00:21:22.002 [2024-12-05 17:07:56.311196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.329462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.329514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.002 [2024-12-05 17:07:56.329529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.209 ms 00:21:22.002 [2024-12-05 17:07:56.329538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.329707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.329721] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.002 [2024-12-05 17:07:56.329733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:21:22.002 [2024-12-05 17:07:56.329741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.002 [2024-12-05 17:07:56.354984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.002 [2024-12-05 17:07:56.355032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.002 [2024-12-05 17:07:56.355047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.217 ms 00:21:22.002 [2024-12-05 17:07:56.355054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.263 [2024-12-05 17:07:56.380105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.263 [2024-12-05 17:07:56.380154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.263 [2024-12-05 17:07:56.380168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.999 ms 00:21:22.263 [2024-12-05 17:07:56.380175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.263 [2024-12-05 17:07:56.404601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.263 [2024-12-05 17:07:56.404648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.263 [2024-12-05 17:07:56.404662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:21:22.263 [2024-12-05 17:07:56.404669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.263 [2024-12-05 17:07:56.428946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.263 [2024-12-05 17:07:56.429001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.263 [2024-12-05 17:07:56.429014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.168 ms 00:21:22.263 [2024-12-05 17:07:56.429022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.263 [2024-12-05 17:07:56.429069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.263 [2024-12-05 17:07:56.429085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429180] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.263 [2024-12-05 17:07:56.429198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 
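
(Note: each ftl_dev_dump_bands line above appears to read "valid blocks / total blocks per band, write count, state"; the device was freshly created, so every band reports 0 / 261120 with wr_cnt 0 and state free. A minimal sketch for tallying band states out of a saved copy of this log — the build.log path is an assumption, not something the harness produces:

  # count how many bands are in each state in a saved copy of this log
  grep -o 'state: [a-z]*' build.log | sort | uniq -c
)
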
[2024-12-05 17:07:56.429400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:22.264 [2024-12-05 17:07:56.429629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.264 [2024-12-05 17:07:56.429976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.265 [2024-12-05 17:07:56.429988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.265 [2024-12-05 17:07:56.429996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.265 [2024-12-05 17:07:56.430006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.265 [2024-12-05 17:07:56.430022] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.265 [2024-12-05 17:07:56.430033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a74b0820-3859-4012-a4fc-e9af9488b070 00:21:22.265 [2024-12-05 17:07:56.430041] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:22.265 [2024-12-05 17:07:56.430053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:22.265 [2024-12-05 17:07:56.430063] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:22.265 [2024-12-05 17:07:56.430073] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:22.265 [2024-12-05 17:07:56.430080] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.265 [2024-12-05 17:07:56.430091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.265 [2024-12-05 17:07:56.430098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.265 [2024-12-05 17:07:56.430107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.265 [2024-12-05 17:07:56.430113] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:22.265 [2024-12-05 17:07:56.430123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.265 [2024-12-05 17:07:56.430131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.265 [2024-12-05 17:07:56.430142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.056 ms 00:21:22.265 [2024-12-05 17:07:56.430153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.443877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.265 [2024-12-05 17:07:56.443924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.265 [2024-12-05 17:07:56.443937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.680 ms 00:21:22.265 [2024-12-05 17:07:56.443946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.444381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.265 [2024-12-05 17:07:56.444417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.265 [2024-12-05 17:07:56.444432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:21:22.265 [2024-12-05 17:07:56.444440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.490787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.265 [2024-12-05 17:07:56.490839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.265 [2024-12-05 17:07:56.490853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.265 [2024-12-05 17:07:56.490861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.490937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.265 [2024-12-05 17:07:56.490946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.265 [2024-12-05 17:07:56.490971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.265 [2024-12-05 17:07:56.490978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.491075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.265 [2024-12-05 17:07:56.491087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.265 [2024-12-05 17:07:56.491098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.265 [2024-12-05 17:07:56.491105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.491127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.265 [2024-12-05 17:07:56.491136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.265 [2024-12-05 17:07:56.491146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.265 [2024-12-05 17:07:56.491156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.265 [2024-12-05 17:07:56.573800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.265 [2024-12-05 17:07:56.573863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.265 [2024-12-05 17:07:56.573879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:22.265 [2024-12-05 17:07:56.573888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.526 [2024-12-05 17:07:56.642075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.526 [2024-12-05 17:07:56.642196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.526 [2024-12-05 17:07:56.642296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.526 [2024-12-05 17:07:56.642428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.526 [2024-12-05 17:07:56.642499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.526 [2024-12-05 17:07:56.642573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.526 [2024-12-05 17:07:56.642644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.526 [2024-12-05 17:07:56.642654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.526 [2024-12-05 17:07:56.642663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.526 [2024-12-05 17:07:56.642811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.275 ms, result 0 00:21:22.526 true 00:21:22.526 17:07:56 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77229 
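
(Note: the restore.sh trace above shows the bdev configuration being captured as JSON and the FTL device being torn down before the service process is killed. A minimal sketch of that sequence, under the assumption that the echoed JSON fragments are redirected into the ftl.json consumed by spdk_dd later — $FTL_JSON and $svcpid are placeholder names, not variables taken from the script:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  {
    echo '{"subsystems": ['
    "$rpc_py" save_subsystem_config -n bdev   # dump the current bdev config as JSON
    echo ']}'
  } > "$FTL_JSON"
  "$rpc_py" bdev_ftl_unload -b ftl0           # persist metadata and detach ftl0 ('FTL shutdown' above)
  killprocess "$svcpid"                       # autotest_common.sh helper: kill the pid, then wait
)
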
00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77229 ']' 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77229 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77229 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:22.526 killing process with pid 77229 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77229' 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77229 00:21:22.526 17:07:56 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77229 00:21:29.115 17:08:02 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:21:32.402 262144+0 records in 00:21:32.402 262144+0 records out 00:21:32.402 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.67335 s, 292 MB/s 00:21:32.402 17:08:06 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:33.777 17:08:08 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:34.037 [2024-12-05 17:08:08.148372] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:21:34.037 [2024-12-05 17:08:08.148476] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77465 ] 00:21:34.037 [2024-12-05 17:08:08.298810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.037 [2024-12-05 17:08:08.373824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.297 [2024-12-05 17:08:08.582119] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.297 [2024-12-05 17:08:08.582174] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:34.559 [2024-12-05 17:08:08.738430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.738482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:34.559 [2024-12-05 17:08:08.738496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:34.559 [2024-12-05 17:08:08.738504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.738553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.738566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:34.559 [2024-12-05 17:08:08.738575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:34.559 [2024-12-05 17:08:08.738582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.738601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:21:34.559 [2024-12-05 17:08:08.739293] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:34.559 [2024-12-05 17:08:08.739316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.739324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:34.559 [2024-12-05 17:08:08.739333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:21:34.559 [2024-12-05 17:08:08.739341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.740600] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:34.559 [2024-12-05 17:08:08.753847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.753891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:34.559 [2024-12-05 17:08:08.753903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.248 ms 00:21:34.559 [2024-12-05 17:08:08.753911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.753985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.753995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:34.559 [2024-12-05 17:08:08.754009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:34.559 [2024-12-05 17:08:08.754017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.760790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.760833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:34.559 [2024-12-05 17:08:08.760843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:21:34.559 [2024-12-05 17:08:08.760855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.760929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.760937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:34.559 [2024-12-05 17:08:08.760964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:34.559 [2024-12-05 17:08:08.760973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.761011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.761022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:34.559 [2024-12-05 17:08:08.761029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:34.559 [2024-12-05 17:08:08.761037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.761062] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:34.559 [2024-12-05 17:08:08.764622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.764658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:34.559 [2024-12-05 17:08:08.764670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.565 ms 00:21:34.559 [2024-12-05 17:08:08.764678] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.764730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.764740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:34.559 [2024-12-05 17:08:08.764748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:34.559 [2024-12-05 17:08:08.764755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.764788] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:34.559 [2024-12-05 17:08:08.764811] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:34.559 [2024-12-05 17:08:08.764847] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:34.559 [2024-12-05 17:08:08.764864] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:34.559 [2024-12-05 17:08:08.764983] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:34.559 [2024-12-05 17:08:08.764994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:34.559 [2024-12-05 17:08:08.765004] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:34.559 [2024-12-05 17:08:08.765015] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:34.559 [2024-12-05 17:08:08.765025] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:34.559 [2024-12-05 17:08:08.765033] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:34.559 [2024-12-05 17:08:08.765041] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:34.559 [2024-12-05 17:08:08.765051] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:34.559 [2024-12-05 17:08:08.765059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:34.559 [2024-12-05 17:08:08.765067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.765075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:34.559 [2024-12-05 17:08:08.765082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:21:34.559 [2024-12-05 17:08:08.765089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.765172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.559 [2024-12-05 17:08:08.765180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:34.559 [2024-12-05 17:08:08.765187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:21:34.559 [2024-12-05 17:08:08.765194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.559 [2024-12-05 17:08:08.765315] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:34.559 [2024-12-05 17:08:08.765326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:34.559 [2024-12-05 17:08:08.765335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:34.560 [2024-12-05 17:08:08.765343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:34.560 [2024-12-05 17:08:08.765358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:34.560 [2024-12-05 17:08:08.765379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.560 [2024-12-05 17:08:08.765393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:34.560 [2024-12-05 17:08:08.765399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:34.560 [2024-12-05 17:08:08.765406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:34.560 [2024-12-05 17:08:08.765420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:34.560 [2024-12-05 17:08:08.765427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:34.560 [2024-12-05 17:08:08.765433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:34.560 [2024-12-05 17:08:08.765447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:34.560 [2024-12-05 17:08:08.765467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:34.560 [2024-12-05 17:08:08.765487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:34.560 [2024-12-05 17:08:08.765507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:34.560 [2024-12-05 17:08:08.765527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765540] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:34.560 [2024-12-05 17:08:08.765547] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.560 [2024-12-05 17:08:08.765559] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:21:34.560 [2024-12-05 17:08:08.765565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:34.560 [2024-12-05 17:08:08.765572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:34.560 [2024-12-05 17:08:08.765578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:34.560 [2024-12-05 17:08:08.765584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:34.560 [2024-12-05 17:08:08.765591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:34.560 [2024-12-05 17:08:08.765604] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:34.560 [2024-12-05 17:08:08.765610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765616] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:34.560 [2024-12-05 17:08:08.765624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:34.560 [2024-12-05 17:08:08.765633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:34.560 [2024-12-05 17:08:08.765650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:34.560 [2024-12-05 17:08:08.765657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:34.560 [2024-12-05 17:08:08.765664] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:34.560 [2024-12-05 17:08:08.765670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:34.560 [2024-12-05 17:08:08.765676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:34.560 [2024-12-05 17:08:08.765683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:34.560 [2024-12-05 17:08:08.765691] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:34.560 [2024-12-05 17:08:08.765700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:34.560 [2024-12-05 17:08:08.765718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:34.560 [2024-12-05 17:08:08.765725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:34.560 [2024-12-05 17:08:08.765732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:34.560 [2024-12-05 17:08:08.765739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:34.560 [2024-12-05 17:08:08.765746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:34.560 [2024-12-05 17:08:08.765753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:34.560 [2024-12-05 17:08:08.765760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:34.560 [2024-12-05 17:08:08.765767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:34.560 [2024-12-05 17:08:08.765774] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765803] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:34.560 [2024-12-05 17:08:08.765811] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:34.560 [2024-12-05 17:08:08.765819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765827] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:34.560 [2024-12-05 17:08:08.765834] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:34.560 [2024-12-05 17:08:08.765842] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:34.560 [2024-12-05 17:08:08.765849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:34.560 [2024-12-05 17:08:08.765856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.765864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:34.560 [2024-12-05 17:08:08.765873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:21:34.560 [2024-12-05 17:08:08.765882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.795340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.795384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:34.560 [2024-12-05 17:08:08.795397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.414 ms 00:21:34.560 [2024-12-05 17:08:08.795410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.795494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.795503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:34.560 [2024-12-05 17:08:08.795512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.059 ms 00:21:34.560 [2024-12-05 17:08:08.795520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.850122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.850178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:34.560 [2024-12-05 17:08:08.850192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.545 ms 00:21:34.560 [2024-12-05 17:08:08.850201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.850254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.850265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:34.560 [2024-12-05 17:08:08.850278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:34.560 [2024-12-05 17:08:08.850286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.850871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.850908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:34.560 [2024-12-05 17:08:08.850920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:21:34.560 [2024-12-05 17:08:08.850929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.560 [2024-12-05 17:08:08.851100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.560 [2024-12-05 17:08:08.851112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:34.560 [2024-12-05 17:08:08.851127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:21:34.560 [2024-12-05 17:08:08.851135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.561 [2024-12-05 17:08:08.867111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.561 [2024-12-05 17:08:08.867161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:34.561 [2024-12-05 17:08:08.867174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.955 ms 00:21:34.561 [2024-12-05 17:08:08.867183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.561 [2024-12-05 17:08:08.881307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:34.561 [2024-12-05 17:08:08.881361] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:34.561 [2024-12-05 17:08:08.881375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.561 [2024-12-05 17:08:08.881384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:34.561 [2024-12-05 17:08:08.881393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.077 ms 00:21:34.561 [2024-12-05 17:08:08.881400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.561 [2024-12-05 17:08:08.906779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.561 [2024-12-05 17:08:08.906852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:34.561 [2024-12-05 17:08:08.906864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:21:34.561 [2024-12-05 17:08:08.906873] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.561 [2024-12-05 17:08:08.919580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.561 [2024-12-05 17:08:08.919630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:34.561 [2024-12-05 17:08:08.919641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.655 ms 00:21:34.561 [2024-12-05 17:08:08.919649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.821 [2024-12-05 17:08:08.932167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.821 [2024-12-05 17:08:08.932218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:34.821 [2024-12-05 17:08:08.932231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.472 ms 00:21:34.821 [2024-12-05 17:08:08.932238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.821 [2024-12-05 17:08:08.932914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.821 [2024-12-05 17:08:08.932946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:34.821 [2024-12-05 17:08:08.932973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:21:34.821 [2024-12-05 17:08:08.932985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.821 [2024-12-05 17:08:08.999352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.821 [2024-12-05 17:08:08.999417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:34.821 [2024-12-05 17:08:08.999434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.347 ms 00:21:34.821 [2024-12-05 17:08:08.999449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.821 [2024-12-05 17:08:09.010812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:34.821 [2024-12-05 17:08:09.014126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.821 [2024-12-05 17:08:09.014171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:34.821 [2024-12-05 17:08:09.014183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.618 ms 00:21:34.821 [2024-12-05 17:08:09.014192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.821 [2024-12-05 17:08:09.014279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.821 [2024-12-05 17:08:09.014291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:34.821 [2024-12-05 17:08:09.014300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:34.821 [2024-12-05 17:08:09.014308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.014385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.822 [2024-12-05 17:08:09.014396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:34.822 [2024-12-05 17:08:09.014405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:34.822 [2024-12-05 17:08:09.014413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.014435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.822 [2024-12-05 17:08:09.014444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:21:34.822 [2024-12-05 17:08:09.014453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:34.822 [2024-12-05 17:08:09.014461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.014497] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:34.822 [2024-12-05 17:08:09.014510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.822 [2024-12-05 17:08:09.014518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:34.822 [2024-12-05 17:08:09.014527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:34.822 [2024-12-05 17:08:09.014534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.040174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.822 [2024-12-05 17:08:09.040228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:34.822 [2024-12-05 17:08:09.040241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.620 ms 00:21:34.822 [2024-12-05 17:08:09.040256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.040339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:34.822 [2024-12-05 17:08:09.040349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:34.822 [2024-12-05 17:08:09.040359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:34.822 [2024-12-05 17:08:09.040367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:34.822 [2024-12-05 17:08:09.042428] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.480 ms, result 0 00:21:35.762  [2024-12-05T17:08:11.071Z] Copying: 11/1024 [MB] (11 MBps) [2024-12-05T17:08:12.459Z] Copying: 41/1024 [MB] (29 MBps) [2024-12-05T17:08:13.404Z] Copying: 54/1024 [MB] (13 MBps) [2024-12-05T17:08:14.348Z] Copying: 66/1024 [MB] (12 MBps) [2024-12-05T17:08:15.293Z] Copying: 82/1024 [MB] (15 MBps) [2024-12-05T17:08:16.234Z] Copying: 95/1024 [MB] (13 MBps) [2024-12-05T17:08:17.179Z] Copying: 111/1024 [MB] (15 MBps) [2024-12-05T17:08:18.123Z] Copying: 128/1024 [MB] (17 MBps) [2024-12-05T17:08:19.142Z] Copying: 146/1024 [MB] (17 MBps) [2024-12-05T17:08:20.101Z] Copying: 179/1024 [MB] (33 MBps) [2024-12-05T17:08:21.483Z] Copying: 192/1024 [MB] (13 MBps) [2024-12-05T17:08:22.055Z] Copying: 204/1024 [MB] (11 MBps) [2024-12-05T17:08:23.444Z] Copying: 217/1024 [MB] (13 MBps) [2024-12-05T17:08:24.388Z] Copying: 233/1024 [MB] (15 MBps) [2024-12-05T17:08:25.340Z] Copying: 249/1024 [MB] (16 MBps) [2024-12-05T17:08:26.283Z] Copying: 269/1024 [MB] (20 MBps) [2024-12-05T17:08:27.225Z] Copying: 288/1024 [MB] (19 MBps) [2024-12-05T17:08:28.168Z] Copying: 304/1024 [MB] (15 MBps) [2024-12-05T17:08:29.113Z] Copying: 326/1024 [MB] (22 MBps) [2024-12-05T17:08:30.059Z] Copying: 349/1024 [MB] (22 MBps) [2024-12-05T17:08:31.448Z] Copying: 363/1024 [MB] (14 MBps) [2024-12-05T17:08:32.392Z] Copying: 377/1024 [MB] (13 MBps) [2024-12-05T17:08:33.338Z] Copying: 397/1024 [MB] (20 MBps) [2024-12-05T17:08:34.283Z] Copying: 416/1024 [MB] (19 MBps) [2024-12-05T17:08:35.231Z] Copying: 433/1024 [MB] (16 MBps) [2024-12-05T17:08:36.174Z] Copying: 454/1024 [MB] (21 MBps) [2024-12-05T17:08:37.119Z] Copying: 472/1024 [MB] (18 MBps) 
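
(Note: the testfile being copied to ftl0 above was generated earlier by restore.sh with dd from /dev/urandom; 4 KiB × 262144 blocks = 1073741824 bytes = 1 GiB, which over the reported 3.67335 s works out to the logged ~292 MB/s. A minimal re-creation sketch — the testfile path matches the one in the trace, everything else is as logged:

  # recreate the 1 GiB random test payload and record its checksum
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
)
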
[2024-12-05T17:08:38.063Z] Copying: 493/1024 [MB] (20 MBps) [2024-12-05T17:08:39.450Z] Copying: 509/1024 [MB] (16 MBps) [2024-12-05T17:08:40.392Z] Copying: 519/1024 [MB] (10 MBps) [2024-12-05T17:08:41.337Z] Copying: 533/1024 [MB] (13 MBps) [2024-12-05T17:08:42.282Z] Copying: 545/1024 [MB] (11 MBps) [2024-12-05T17:08:43.225Z] Copying: 560/1024 [MB] (15 MBps) [2024-12-05T17:08:44.170Z] Copying: 573/1024 [MB] (13 MBps) [2024-12-05T17:08:45.112Z] Copying: 592/1024 [MB] (18 MBps) [2024-12-05T17:08:46.054Z] Copying: 620/1024 [MB] (28 MBps) [2024-12-05T17:08:47.443Z] Copying: 643/1024 [MB] (22 MBps) [2024-12-05T17:08:48.084Z] Copying: 670/1024 [MB] (27 MBps) [2024-12-05T17:08:49.057Z] Copying: 690/1024 [MB] (19 MBps) [2024-12-05T17:08:50.441Z] Copying: 713/1024 [MB] (22 MBps) [2024-12-05T17:08:51.384Z] Copying: 735/1024 [MB] (22 MBps) [2024-12-05T17:08:52.327Z] Copying: 766/1024 [MB] (31 MBps) [2024-12-05T17:08:53.272Z] Copying: 782/1024 [MB] (15 MBps) [2024-12-05T17:08:54.218Z] Copying: 810/1024 [MB] (28 MBps) [2024-12-05T17:08:55.163Z] Copying: 829/1024 [MB] (18 MBps) [2024-12-05T17:08:56.105Z] Copying: 856/1024 [MB] (26 MBps) [2024-12-05T17:08:57.491Z] Copying: 887/1024 [MB] (31 MBps) [2024-12-05T17:08:58.063Z] Copying: 908/1024 [MB] (20 MBps) [2024-12-05T17:08:59.452Z] Copying: 928/1024 [MB] (19 MBps) [2024-12-05T17:09:00.398Z] Copying: 958/1024 [MB] (30 MBps) [2024-12-05T17:09:01.342Z] Copying: 981/1024 [MB] (22 MBps) [2024-12-05T17:09:01.915Z] Copying: 998/1024 [MB] (17 MBps) [2024-12-05T17:09:01.916Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-12-05 17:09:01.786085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.786122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.549 [2024-12-05 17:09:01.786133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:27.549 [2024-12-05 17:09:01.786140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.786156] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:27.549 [2024-12-05 17:09:01.788294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.788316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.549 [2024-12-05 17:09:01.788329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.126 ms 00:22:27.549 [2024-12-05 17:09:01.788335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.790091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.790120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.549 [2024-12-05 17:09:01.790127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:22:27.549 [2024-12-05 17:09:01.790133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.802359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.802390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.549 [2024-12-05 17:09:01.802397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.213 ms 00:22:27.549 [2024-12-05 17:09:01.802403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.807200] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.807225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.549 [2024-12-05 17:09:01.807232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.769 ms 00:22:27.549 [2024-12-05 17:09:01.807239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.825523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.825550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.549 [2024-12-05 17:09:01.825558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.247 ms 00:22:27.549 [2024-12-05 17:09:01.825564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.836953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.836981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.549 [2024-12-05 17:09:01.836989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.360 ms 00:22:27.549 [2024-12-05 17:09:01.836995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.837084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.837093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.549 [2024-12-05 17:09:01.837099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:27.549 [2024-12-05 17:09:01.837104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.855336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.855362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.549 [2024-12-05 17:09:01.855369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.222 ms 00:22:27.549 [2024-12-05 17:09:01.855375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.872887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.872913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.549 [2024-12-05 17:09:01.872921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.487 ms 00:22:27.549 [2024-12-05 17:09:01.872926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.890075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.890102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.549 [2024-12-05 17:09:01.890110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.115 ms 00:22:27.549 [2024-12-05 17:09:01.890115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.549 [2024-12-05 17:09:01.906786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.549 [2024-12-05 17:09:01.906812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.549 [2024-12-05 17:09:01.906819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.629 ms 00:22:27.549 [2024-12-05 17:09:01.906825] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:22:27.549 [2024-12-05 17:09:01.906849] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:27.549 [2024-12-05 17:09:01.906860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (all 100 bands identical)
00:22:27.550 [2024-12-05 17:09:01.907435] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:27.550 [2024-12-05 17:09:01.907443] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a74b0820-3859-4012-a4fc-e9af9488b070
00:22:27.550 [2024-12-05 17:09:01.907449] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:27.550 [2024-12-05 17:09:01.907454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:27.550 [2024-12-05 17:09:01.907460] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:27.550 [2024-12-05 17:09:01.907465] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:27.550 [2024-12-05 17:09:01.907470] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:27.550 [2024-12-05 17:09:01.907480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:27.550 [2024-12-05 17:09:01.907486] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:27.550 [2024-12-05 17:09:01.907490] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:27.550 [2024-12-05 17:09:01.907495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:27.550 [2024-12-05 17:09:01.907500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.550 [2024-12-05 17:09:01.907505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:27.550 [2024-12-05 17:09:01.907511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms
00:22:27.550 [2024-12-05 17:09:01.907516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.810 [2024-12-05 17:09:01.916938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.810 [2024-12-05 17:09:01.916970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:27.810 [2024-12-05 17:09:01.916977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.409 ms
00:22:27.810 [2024-12-05 17:09:01.916983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.810 [2024-12-05 17:09:01.917244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:27.810 [2024-12-05 17:09:01.917256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:27.810 [2024-12-05 17:09:01.917263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms
00:22:27.810 [2024-12-05 17:09:01.917272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.810 [2024-12-05 17:09:01.943029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:27.810 [2024-12-05 17:09:01.943059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:27.810 [2024-12-05 17:09:01.943066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:27.811 [2024-12-05 17:09:01.943072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:27.811 [2024-12-05 17:09:01.943116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:27.811 [2024-12-05 17:09:01.943122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.811
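WAF in the dump above is the write amplification factor, total media writes divided by user writes. This instance saw no user writes at all before the dump (user writes: 0) while metadata housekeeping still cost 960 media writes, so the ratio divides by zero and is printed as inf. A hedged one-liner to recompute it from a saved log; build.log is an assumed file name, and the field positions match the dump_stats lines above:

total=$(grep -o 'total writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
user=$(grep -o 'user writes: [0-9]*' build.log | tail -n1 | awk '{print $3}')
if [ "$user" -eq 0 ]; then
    echo 'WAF: inf'    # no user writes yet, matching the dump above
else
    awk -v t="$total" -v u="$user" 'BEGIN { printf "WAF: %.3f\n", t / u }'
fi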
[2024-12-05 17:09:01.943129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:01.943136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:01.943178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:01.943185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.811 [2024-12-05 17:09:01.943191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:01.943196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:01.943207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:01.943213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.811 [2024-12-05 17:09:01.943219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:01.943224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.002335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.002368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.811 [2024-12-05 17:09:02.002377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.002383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.811 [2024-12-05 17:09:02.051170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.811 [2024-12-05 17:09:02.051247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.811 [2024-12-05 17:09:02.051289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.811 [2024-12-05 17:09:02.051376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051411] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:27.811 [2024-12-05 17:09:02.051417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.811 [2024-12-05 17:09:02.051465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.811 [2024-12-05 17:09:02.051516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.811 [2024-12-05 17:09:02.051522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.811 [2024-12-05 17:09:02.051528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.811 [2024-12-05 17:09:02.051616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.507 ms, result 0 00:22:28.381 00:22:28.381 00:22:28.643 17:09:02 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:22:28.643 [2024-12-05 17:09:02.820647] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:22:28.643 [2024-12-05 17:09:02.820775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78030 ] 00:22:28.643 [2024-12-05 17:09:02.975309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.904 [2024-12-05 17:09:03.053772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.904 [2024-12-05 17:09:03.262686] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:28.904 [2024-12-05 17:09:03.262742] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.165 [2024-12-05 17:09:03.409693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.409732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.165 [2024-12-05 17:09:03.409742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:29.165 [2024-12-05 17:09:03.409749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.409781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.409790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.165 [2024-12-05 17:09:03.409797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:29.165 [2024-12-05 17:09:03.409802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.409815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
nvc0n1p0 as write buffer cache 00:22:29.165 [2024-12-05 17:09:03.410366] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.165 [2024-12-05 17:09:03.410378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.410384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.165 [2024-12-05 17:09:03.410390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:22:29.165 [2024-12-05 17:09:03.410396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.411331] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:29.165 [2024-12-05 17:09:03.420840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.420872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:29.165 [2024-12-05 17:09:03.420881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.510 ms 00:22:29.165 [2024-12-05 17:09:03.420888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.420931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.420939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:29.165 [2024-12-05 17:09:03.420946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:29.165 [2024-12-05 17:09:03.420960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.425322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.425348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.165 [2024-12-05 17:09:03.425359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.324 ms 00:22:29.165 [2024-12-05 17:09:03.425364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.425415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.425422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.165 [2024-12-05 17:09:03.425428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:29.165 [2024-12-05 17:09:03.425434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.425472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.165 [2024-12-05 17:09:03.425479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.165 [2024-12-05 17:09:03.425486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:29.165 [2024-12-05 17:09:03.425493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.165 [2024-12-05 17:09:03.425506] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.165 [2024-12-05 17:09:03.428179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.166 [2024-12-05 17:09:03.428205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.166 [2024-12-05 17:09:03.428212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.676 ms 00:22:29.166 [2024-12-05 17:09:03.428218] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.166 [2024-12-05 17:09:03.428245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.166 [2024-12-05 17:09:03.428252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.166 [2024-12-05 17:09:03.428258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:29.166 [2024-12-05 17:09:03.428264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.166 [2024-12-05 17:09:03.428277] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:29.166 [2024-12-05 17:09:03.428291] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:29.166 [2024-12-05 17:09:03.428320] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:29.166 [2024-12-05 17:09:03.428332] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:29.166 [2024-12-05 17:09:03.428410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.166 [2024-12-05 17:09:03.428418] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.166 [2024-12-05 17:09:03.428426] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:29.166 [2024-12-05 17:09:03.428433] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428440] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428446] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:29.166 [2024-12-05 17:09:03.428452] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.166 [2024-12-05 17:09:03.428460] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.166 [2024-12-05 17:09:03.428465] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.166 [2024-12-05 17:09:03.428471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.166 [2024-12-05 17:09:03.428476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.166 [2024-12-05 17:09:03.428482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:22:29.166 [2024-12-05 17:09:03.428487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.166 [2024-12-05 17:09:03.428550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.166 [2024-12-05 17:09:03.428556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.166 [2024-12-05 17:09:03.428562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:29.166 [2024-12-05 17:09:03.428567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.166 [2024-12-05 17:09:03.428642] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.166 [2024-12-05 17:09:03.428650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.166 [2024-12-05 17:09:03.428656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:22:29.166 [2024-12-05 17:09:03.428662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.166 [2024-12-05 17:09:03.428673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.166 [2024-12-05 17:09:03.428704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.166 [2024-12-05 17:09:03.428715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.166 [2024-12-05 17:09:03.428720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:29.166 [2024-12-05 17:09:03.428726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.166 [2024-12-05 17:09:03.428735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.166 [2024-12-05 17:09:03.428740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:29.166 [2024-12-05 17:09:03.428745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:29.166 [2024-12-05 17:09:03.428756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.166 [2024-12-05 17:09:03.428772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.166 [2024-12-05 17:09:03.428787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.166 [2024-12-05 17:09:03.428802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.166 [2024-12-05 17:09:03.428816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.166 [2024-12-05 17:09:03.428831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.166 [2024-12-05 17:09:03.428841] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:22:29.166 [2024-12-05 17:09:03.428846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:29.166 [2024-12-05 17:09:03.428850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.166 [2024-12-05 17:09:03.428855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.166 [2024-12-05 17:09:03.428860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:29.166 [2024-12-05 17:09:03.428864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.166 [2024-12-05 17:09:03.428874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:29.166 [2024-12-05 17:09:03.428879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428884] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.166 [2024-12-05 17:09:03.428891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.166 [2024-12-05 17:09:03.428897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.166 [2024-12-05 17:09:03.428909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.166 [2024-12-05 17:09:03.428914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.166 [2024-12-05 17:09:03.428919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.166 [2024-12-05 17:09:03.428924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.166 [2024-12-05 17:09:03.428929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.166 [2024-12-05 17:09:03.428934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.166 [2024-12-05 17:09:03.428940] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.166 [2024-12-05 17:09:03.428957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.166 [2024-12-05 17:09:03.428965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:29.166 [2024-12-05 17:09:03.428970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:29.166 [2024-12-05 17:09:03.428975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:29.166 [2024-12-05 17:09:03.428981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:29.166 [2024-12-05 17:09:03.428986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:29.166 [2024-12-05 17:09:03.428991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:29.166 [2024-12-05 17:09:03.428999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:29.166 [2024-12-05 17:09:03.429004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:29.166 [2024-12-05 17:09:03.429009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:29.166 [2024-12-05 17:09:03.429015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:29.166 [2024-12-05 17:09:03.429020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:29.166 [2024-12-05 17:09:03.429026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:29.166 [2024-12-05 17:09:03.429032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:29.166 [2024-12-05 17:09:03.429037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:29.166 [2024-12-05 17:09:03.429042] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.166 [2024-12-05 17:09:03.429049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.167 [2024-12-05 17:09:03.429055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.167 [2024-12-05 17:09:03.429060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.167 [2024-12-05 17:09:03.429065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.167 [2024-12-05 17:09:03.429071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.167 [2024-12-05 17:09:03.429076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.429083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.167 [2024-12-05 17:09:03.429089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.486 ms 00:22:29.167 [2024-12-05 17:09:03.429094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.450049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.450079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.167 [2024-12-05 17:09:03.450091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.923 ms 00:22:29.167 [2024-12-05 17:09:03.450097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.450158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.450165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.167 [2024-12-05 17:09:03.450171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.045 ms 00:22:29.167 [2024-12-05 17:09:03.450178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.501631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.501660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.167 [2024-12-05 17:09:03.501668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.416 ms 00:22:29.167 [2024-12-05 17:09:03.501674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.501698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.501707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.167 [2024-12-05 17:09:03.501713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:29.167 [2024-12-05 17:09:03.501718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.502048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.502060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.167 [2024-12-05 17:09:03.502067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:22:29.167 [2024-12-05 17:09:03.502073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.502168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.502179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.167 [2024-12-05 17:09:03.502185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:22:29.167 [2024-12-05 17:09:03.502191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.512664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.512706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.167 [2024-12-05 17:09:03.512714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.459 ms 00:22:29.167 [2024-12-05 17:09:03.512721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.167 [2024-12-05 17:09:03.522377] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:29.167 [2024-12-05 17:09:03.522406] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:29.167 [2024-12-05 17:09:03.522416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.167 [2024-12-05 17:09:03.522422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:29.167 [2024-12-05 17:09:03.522428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.624 ms 00:22:29.167 [2024-12-05 17:09:03.522434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.426 [2024-12-05 17:09:03.540898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.426 [2024-12-05 17:09:03.540927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:29.426 [2024-12-05 17:09:03.540935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.435 ms 00:22:29.426 [2024-12-05 17:09:03.540941] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.426 [2024-12-05 17:09:03.549944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.549975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:29.427 [2024-12-05 17:09:03.549982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.958 ms 00:22:29.427 [2024-12-05 17:09:03.549988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.558896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.558921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:29.427 [2024-12-05 17:09:03.558929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.884 ms 00:22:29.427 [2024-12-05 17:09:03.558935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.559404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.559426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.427 [2024-12-05 17:09:03.559433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:22:29.427 [2024-12-05 17:09:03.559438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.603704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.603754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:29.427 [2024-12-05 17:09:03.603764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.252 ms 00:22:29.427 [2024-12-05 17:09:03.603771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.611443] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:29.427 [2024-12-05 17:09:03.613266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.613291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:29.427 [2024-12-05 17:09:03.613300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.462 ms 00:22:29.427 [2024-12-05 17:09:03.613307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.613373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.613382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:29.427 [2024-12-05 17:09:03.613392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:29.427 [2024-12-05 17:09:03.613398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.613440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.613448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:29.427 [2024-12-05 17:09:03.613455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:22:29.427 [2024-12-05 17:09:03.613462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.427 [2024-12-05 17:09:03.613477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.427 [2024-12-05 17:09:03.613484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Start core poller
00:22:29.427 [2024-12-05 17:09:03.613491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:22:29.427 [2024-12-05 17:09:03.613499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:29.427 [2024-12-05 17:09:03.613525] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:22:29.427 [2024-12-05 17:09:03.613533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:29.427 [2024-12-05 17:09:03.613540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:22:29.427 [2024-12-05 17:09:03.613547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:22:29.427 [2024-12-05 17:09:03.613553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:29.427 [2024-12-05 17:09:03.631109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:29.427 [2024-12-05 17:09:03.631135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:29.427 [2024-12-05 17:09:03.631147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.543 ms
00:22:29.427 [2024-12-05 17:09:03.631154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:29.427 [2024-12-05 17:09:03.631205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:29.427 [2024-12-05 17:09:03.631212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:29.427 [2024-12-05 17:09:03.631219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:22:29.427 [2024-12-05 17:09:03.631224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:29.427 [2024-12-05 17:09:03.631936] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.930 ms, result 0
00:22:30.817 [2024-12-05T17:10:04.201Z] Copying: 1024/1024 [MB] (average 17 MBps)
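This second 'FTL startup' (221.930 ms) is the path the ftl_restore test exists to exercise: instead of formatting a fresh instance it loads the superblock persisted by the shutdown above, replays the NV cache ('full chunks = 2'), restores the P2L checkpoints (44.252 ms, the longest step in this run) and rebuilds the L2P before any read is served. A quick hedged check that a startup took this load path rather than a clean create, using strings that appear verbatim in the records above (console.log again being an assumed saved copy of this output):

grep -E 'SHM: clean|full chunks|Restore (P2L checkpoints|L2P)' console.log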
[2024-12-05 17:10:04.104009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.834 [2024-12-05 17:10:04.104096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:29.834 [2024-12-05 17:10:04.104114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:29.834 [2024-12-05 17:10:04.104123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:29.834 [2024-12-05 17:10:04.104148] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:29.834 [2024-12-05 17:10:04.108147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.834 [2024-12-05 17:10:04.108203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:29.834 [2024-12-05 17:10:04.108216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.979 ms
00:23:29.834 [2024-12-05 17:10:04.108225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:29.834 [2024-12-05 17:10:04.108469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:29.834 [2024-12-05 17:10:04.108480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:29.834 [2024-12-05 17:10:04.108490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms
00:23:29.834 [2024-12-05 17:10:04.108498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:29.834 [2024-12-05 17:10:04.112390] mngt/ftl_mngt.c: 427:trace_step:
*NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.112422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.834 [2024-12-05 17:10:04.112441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.875 ms 00:23:29.834 [2024-12-05 17:10:04.112450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.834 [2024-12-05 17:10:04.118819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.118868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.834 [2024-12-05 17:10:04.118882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.342 ms 00:23:29.834 [2024-12-05 17:10:04.118891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.834 [2024-12-05 17:10:04.148409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.148461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.834 [2024-12-05 17:10:04.148476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.422 ms 00:23:29.834 [2024-12-05 17:10:04.148485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.834 [2024-12-05 17:10:04.166110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.166332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.834 [2024-12-05 17:10:04.166364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.574 ms 00:23:29.834 [2024-12-05 17:10:04.166383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.834 [2024-12-05 17:10:04.166566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.166582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.834 [2024-12-05 17:10:04.166592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:23:29.834 [2024-12-05 17:10:04.166601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.834 [2024-12-05 17:10:04.194105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.834 [2024-12-05 17:10:04.194307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.834 [2024-12-05 17:10:04.194337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.488 ms 00:23:29.834 [2024-12-05 17:10:04.194348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.097 [2024-12-05 17:10:04.220201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.097 [2024-12-05 17:10:04.220253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:30.097 [2024-12-05 17:10:04.220266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.712 ms 00:23:30.097 [2024-12-05 17:10:04.220274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.097 [2024-12-05 17:10:04.245507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.097 [2024-12-05 17:10:04.245706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:30.097 [2024-12-05 17:10:04.245734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.185 ms 00:23:30.097 [2024-12-05 17:10:04.245745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.097 
[2024-12-05 17:10:04.270704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.097 [2024-12-05 17:10:04.270756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:30.097 [2024-12-05 17:10:04.270769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.754 ms 00:23:30.097 [2024-12-05 17:10:04.270776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.097 [2024-12-05 17:10:04.270821] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:30.097 [2024-12-05 17:10:04.270844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Bands 2-100 identical: 0 / 261120 wr_cnt: 0 state: free] 00:23:30.098 
[2024-12-05 17:10:04.271972] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:30.098 [2024-12-05 17:10:04.271987] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a74b0820-3859-4012-a4fc-e9af9488b070 00:23:30.098 [2024-12-05 17:10:04.272001] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:30.098 [2024-12-05 17:10:04.272014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:30.098 [2024-12-05 17:10:04.272026] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:30.098 [2024-12-05 17:10:04.272040] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:30.098 [2024-12-05 17:10:04.272065] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:30.098 [2024-12-05 17:10:04.272080] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:30.098 [2024-12-05 17:10:04.272093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:30.098 [2024-12-05 17:10:04.272105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:30.098 [2024-12-05 17:10:04.272117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:30.098 [2024-12-05 17:10:04.272130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.098 [2024-12-05 17:10:04.272151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:30.098 [2024-12-05 17:10:04.272170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:23:30.098 [2024-12-05 17:10:04.272183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.285787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.098 [2024-12-05 17:10:04.285833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:30.098 [2024-12-05 17:10:04.285845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.563 ms 00:23:30.098 [2024-12-05 17:10:04.285853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.286357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:30.098 [2024-12-05 17:10:04.286415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:30.098 [2024-12-05 17:10:04.286432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:23:30.098 [2024-12-05 17:10:04.286446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.323229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.098 [2024-12-05 17:10:04.323281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:30.098 [2024-12-05 17:10:04.323294] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.098 [2024-12-05 17:10:04.323304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.323368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.098 [2024-12-05 17:10:04.323382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:30.098 [2024-12-05 17:10:04.323392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.098 [2024-12-05 17:10:04.323400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.323489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.098 [2024-12-05 17:10:04.323501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:30.098 [2024-12-05 17:10:04.323510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.098 [2024-12-05 17:10:04.323520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.323537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.098 [2024-12-05 17:10:04.323547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:30.098 [2024-12-05 17:10:04.323560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.098 [2024-12-05 17:10:04.323568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.098 [2024-12-05 17:10:04.410024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.098 [2024-12-05 17:10:04.410083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:30.098 [2024-12-05 17:10:04.410097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.098 [2024-12-05 17:10:04.410106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:30.360 [2024-12-05 17:10:04.480396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:30.360 [2024-12-05 17:10:04.480483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:30.360 [2024-12-05 17:10:04.480573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize memory pools 00:23:30.360 [2024-12-05 17:10:04.480723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:30.360 [2024-12-05 17:10:04.480787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:30.360 [2024-12-05 17:10:04.480861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.480914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:30.360 [2024-12-05 17:10:04.480925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:30.360 [2024-12-05 17:10:04.480933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:30.360 [2024-12-05 17:10:04.480944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:30.360 [2024-12-05 17:10:04.481118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.075 ms, result 0 00:23:30.931 00:23:30.931 00:23:30.931 17:10:05 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:33.474 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:33.474 17:10:07 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:33.474 [2024-12-05 17:10:07.280632] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:23:33.474 [2024-12-05 17:10:07.280835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78694 ] 00:23:33.474 [2024-12-05 17:10:07.434128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.474 [2024-12-05 17:10:07.547207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.474 [2024-12-05 17:10:07.839264] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.474 [2024-12-05 17:10:07.839613] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:33.735 [2024-12-05 17:10:07.997777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.735 [2024-12-05 17:10:07.998012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:33.736 [2024-12-05 17:10:07.998198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:33.736 [2024-12-05 17:10:07.998241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:07.998333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:07.998504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.736 [2024-12-05 17:10:07.998525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:33.736 [2024-12-05 17:10:07.998600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:07.998644] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:33.736 [2024-12-05 17:10:07.999386] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:33.736 [2024-12-05 17:10:07.999516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:07.999574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.736 [2024-12-05 17:10:07.999623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:23:33.736 [2024-12-05 17:10:07.999646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.001752] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:33.736 [2024-12-05 17:10:08.015998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.016178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:33.736 [2024-12-05 17:10:08.016831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.249 ms 00:23:33.736 [2024-12-05 17:10:08.016882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.017066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.017101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:33.736 [2024-12-05 17:10:08.017127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:23:33.736 [2024-12-05 17:10:08.017148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.025041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:33.736 [2024-12-05 17:10:08.025182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.736 [2024-12-05 17:10:08.025237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.788 ms 00:23:33.736 [2024-12-05 17:10:08.025267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.025362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.025385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.736 [2024-12-05 17:10:08.025406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:23:33.736 [2024-12-05 17:10:08.025426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.025485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.025567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:33.736 [2024-12-05 17:10:08.025592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:33.736 [2024-12-05 17:10:08.025612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.025647] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:33.736 [2024-12-05 17:10:08.029641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.029682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.736 [2024-12-05 17:10:08.029696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.003 ms 00:23:33.736 [2024-12-05 17:10:08.029704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.029747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.029757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:33.736 [2024-12-05 17:10:08.029766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:33.736 [2024-12-05 17:10:08.029774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.029826] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:33.736 [2024-12-05 17:10:08.029853] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:33.736 [2024-12-05 17:10:08.029891] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:33.736 [2024-12-05 17:10:08.029910] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:33.736 [2024-12-05 17:10:08.030042] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:33.736 [2024-12-05 17:10:08.030055] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:33.736 [2024-12-05 17:10:08.030067] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:33.736 [2024-12-05 17:10:08.030078] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030087] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030096] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:33.736 [2024-12-05 17:10:08.030104] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:33.736 [2024-12-05 17:10:08.030115] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:33.736 [2024-12-05 17:10:08.030124] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:33.736 [2024-12-05 17:10:08.030132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.030139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:33.736 [2024-12-05 17:10:08.030147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:23:33.736 [2024-12-05 17:10:08.030154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.030237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.736 [2024-12-05 17:10:08.030246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:33.736 [2024-12-05 17:10:08.030255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:33.736 [2024-12-05 17:10:08.030262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.736 [2024-12-05 17:10:08.030369] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:33.736 [2024-12-05 17:10:08.030380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:33.736 [2024-12-05 17:10:08.030389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:33.736 [2024-12-05 17:10:08.030412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:33.736 [2024-12-05 17:10:08.030433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.736 [2024-12-05 17:10:08.030447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:33.736 [2024-12-05 17:10:08.030454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:33.736 [2024-12-05 17:10:08.030462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:33.736 [2024-12-05 17:10:08.030476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:33.736 [2024-12-05 17:10:08.030483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:33.736 [2024-12-05 17:10:08.030490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:33.736 [2024-12-05 17:10:08.030504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030511] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:33.736 [2024-12-05 17:10:08.030525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:33.736 [2024-12-05 17:10:08.030546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:33.736 [2024-12-05 17:10:08.030566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:33.736 [2024-12-05 17:10:08.030586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:33.736 [2024-12-05 17:10:08.030599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:33.736 [2024-12-05 17:10:08.030605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:33.736 [2024-12-05 17:10:08.030613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.736 [2024-12-05 17:10:08.030619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:33.736 [2024-12-05 17:10:08.030625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:33.736 [2024-12-05 17:10:08.030631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:33.736 [2024-12-05 17:10:08.030638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:33.736 [2024-12-05 17:10:08.030645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:33.736 [2024-12-05 17:10:08.030651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.737 [2024-12-05 17:10:08.030657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:33.737 [2024-12-05 17:10:08.030664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:33.737 [2024-12-05 17:10:08.030670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.737 [2024-12-05 17:10:08.030676] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:33.737 [2024-12-05 17:10:08.030686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:33.737 [2024-12-05 17:10:08.030693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:33.737 [2024-12-05 17:10:08.030701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:33.737 [2024-12-05 17:10:08.030709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:33.737 [2024-12-05 17:10:08.030716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:33.737 [2024-12-05 17:10:08.030722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:33.737 
[2024-12-05 17:10:08.030729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:33.737 [2024-12-05 17:10:08.030736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:33.737 [2024-12-05 17:10:08.030743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:33.737 [2024-12-05 17:10:08.030751] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:33.737 [2024-12-05 17:10:08.030761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:33.737 [2024-12-05 17:10:08.030779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:33.737 [2024-12-05 17:10:08.030786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:33.737 [2024-12-05 17:10:08.030793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:33.737 [2024-12-05 17:10:08.030800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:33.737 [2024-12-05 17:10:08.030807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:33.737 [2024-12-05 17:10:08.030813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:33.737 [2024-12-05 17:10:08.030820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:33.737 [2024-12-05 17:10:08.030828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:33.737 [2024-12-05 17:10:08.030836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:33.737 [2024-12-05 17:10:08.030870] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:33.737 [2024-12-05 17:10:08.030879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:33.737 [2024-12-05 17:10:08.030894] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:33.737 [2024-12-05 17:10:08.030901] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:33.737 [2024-12-05 17:10:08.030908] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:33.737 [2024-12-05 17:10:08.030915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.737 [2024-12-05 17:10:08.030925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:33.737 [2024-12-05 17:10:08.030933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:23:33.737 [2024-12-05 17:10:08.030940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.737 [2024-12-05 17:10:08.062151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.737 [2024-12-05 17:10:08.062325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.737 [2024-12-05 17:10:08.062343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.148 ms 00:23:33.737 [2024-12-05 17:10:08.062359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.737 [2024-12-05 17:10:08.062450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.737 [2024-12-05 17:10:08.062459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:33.737 [2024-12-05 17:10:08.062469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:33.737 [2024-12-05 17:10:08.062477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.110752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.110806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.999 [2024-12-05 17:10:08.110821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.215 ms 00:23:33.999 [2024-12-05 17:10:08.110830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.110879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.110890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.999 [2024-12-05 17:10:08.110903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:33.999 [2024-12-05 17:10:08.110911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.111526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.111550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.999 [2024-12-05 17:10:08.111561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.506 ms 00:23:33.999 [2024-12-05 17:10:08.111570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.111724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.111741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.999 [2024-12-05 17:10:08.111756] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:23:33.999 [2024-12-05 17:10:08.111768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.127123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.127169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.999 [2024-12-05 17:10:08.127180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.336 ms 00:23:33.999 [2024-12-05 17:10:08.127188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.141287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:33.999 [2024-12-05 17:10:08.141333] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:33.999 [2024-12-05 17:10:08.141346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.141354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:33.999 [2024-12-05 17:10:08.141363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.049 ms 00:23:33.999 [2024-12-05 17:10:08.141371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.166826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.166875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:33.999 [2024-12-05 17:10:08.166888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.402 ms 00:23:33.999 [2024-12-05 17:10:08.166896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.179719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.179763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:33.999 [2024-12-05 17:10:08.179774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.742 ms 00:23:33.999 [2024-12-05 17:10:08.179782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.192374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.192418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:33.999 [2024-12-05 17:10:08.192430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.548 ms 00:23:33.999 [2024-12-05 17:10:08.192437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.193140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.193165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:33.999 [2024-12-05 17:10:08.193179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:23:33.999 [2024-12-05 17:10:08.193187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.259734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.259793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:33.999 [2024-12-05 17:10:08.259815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.528 ms 00:23:33.999 [2024-12-05 17:10:08.259825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.271104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:33.999 [2024-12-05 17:10:08.274399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.274441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:33.999 [2024-12-05 17:10:08.274453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.521 ms 00:23:33.999 [2024-12-05 17:10:08.274462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.274546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.274558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:33.999 [2024-12-05 17:10:08.274571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:33.999 [2024-12-05 17:10:08.274580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.274651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.274662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:33.999 [2024-12-05 17:10:08.274670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:33.999 [2024-12-05 17:10:08.274679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.274699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.274709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:33.999 [2024-12-05 17:10:08.274717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:33.999 [2024-12-05 17:10:08.274725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.274765] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:33.999 [2024-12-05 17:10:08.274777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.274785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:33.999 [2024-12-05 17:10:08.274793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:33.999 [2024-12-05 17:10:08.274801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.300348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.300394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:33.999 [2024-12-05 17:10:08.300414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.527 ms 00:23:33.999 [2024-12-05 17:10:08.300422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.999 [2024-12-05 17:10:08.300502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.999 [2024-12-05 17:10:08.300513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:33.999 [2024-12-05 17:10:08.300522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:23:33.999 [2024-12-05 17:10:08.300531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:33.999 [2024-12-05 17:10:08.301808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.558 ms, result 0 00:23:35.382  [2024-12-05T17:11:02.748Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 17:11:02.658986] 
[2024-12-05 17:11:02.658986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.381 [2024-12-05 17:11:02.659058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:28.381 [2024-12-05 17:11:02.659085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:24:28.381 [2024-12-05 17:11:02.659094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.381 [2024-12-05 17:11:02.662983] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:28.381 [2024-12-05 17:11:02.666811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.381 [2024-12-05 17:11:02.667026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:28.381 [2024-12-05 17:11:02.667051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.774 ms
00:24:28.381 [2024-12-05 17:11:02.667060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.381 [2024-12-05 17:11:02.680336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.381 [2024-12-05 17:11:02.680384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:28.381 [2024-12-05 17:11:02.680399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.390 ms
00:24:28.381 [2024-12-05 17:11:02.680415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.381 [2024-12-05 17:11:02.704556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.381 [2024-12-05 17:11:02.704603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:28.381 [2024-12-05 17:11:02.704616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms
00:24:28.381 [2024-12-05 17:11:02.704625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.381 [2024-12-05 17:11:02.710855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.381 [2024-12-05 17:11:02.711058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:28.381 [2024-12-05 17:11:02.711079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.194 ms
00:24:28.382 [2024-12-05 17:11:02.711096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.382 [2024-12-05 17:11:02.737552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.382 [2024-12-05 17:11:02.737598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:28.382 [2024-12-05 17:11:02.737612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.394 ms
00:24:28.382 [2024-12-05 17:11:02.737620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:02.754492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.644 [2024-12-05 17:11:02.754534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:28.644 [2024-12-05 17:11:02.754547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.827 ms
00:24:28.644 [2024-12-05 17:11:02.754556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:02.908671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.644 [2024-12-05 17:11:02.908760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:28.644 [2024-12-05 17:11:02.908773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 154.065 ms
00:24:28.644 [2024-12-05 17:11:02.908781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:02.934669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.644 [2024-12-05 17:11:02.934712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:24:28.644 [2024-12-05 17:11:02.934724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.871 ms
00:24:28.644 [2024-12-05 17:11:02.934732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:02.960237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.644 [2024-12-05 17:11:02.960415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:24:28.644 [2024-12-05 17:11:02.960436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.463 ms
00:24:28.644 [2024-12-05 17:11:02.960444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:02.984889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.644 [2024-12-05 17:11:02.985080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:28.644 [2024-12-05 17:11:02.985153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.372 ms
00:24:28.644 [2024-12-05 17:11:02.985177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.644 [2024-12-05 17:11:03.009576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.907 [2024-12-05 17:11:03.009735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:28.908 [2024-12-05 17:11:03.009753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.324 ms
00:24:28.908 [2024-12-05 17:11:03.009761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.908 [2024-12-05 17:11:03.009829] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:28.908 [2024-12-05 17:11:03.009846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 93696 / 261120 wr_cnt: 1 state: open
00:24:28.908 [2024-12-05 17:11:03.009858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.009993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:24:28.908 [2024-12-05 17:11:03.010358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:28.909 [2024-12-05 17:11:03.010682] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:28.909 [2024-12-05 17:11:03.010691] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a74b0820-3859-4012-a4fc-e9af9488b070
00:24:28.909 [2024-12-05 17:11:03.010699] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 93696
00:24:28.909 [2024-12-05 17:11:03.010707] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 94656
00:24:28.909 [2024-12-05 17:11:03.010714] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 93696
00:24:28.909 [2024-12-05 17:11:03.010722] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102
00:24:28.909 [2024-12-05 17:11:03.010739] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:28.909 [2024-12-05 17:11:03.010747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:28.909 [2024-12-05 17:11:03.010755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:28.909 [2024-12-05 17:11:03.010761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:24:28.909 [2024-12-05 17:11:03.010769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:24:28.909 [2024-12-05 17:11:03.010776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.909 [2024-12-05 17:11:03.010785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:24:28.909 [2024-12-05 17:11:03.010794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.948 ms
00:24:28.909 [2024-12-05 17:11:03.010801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.024413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.909 [2024-12-05 17:11:03.024559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:24:28.909 [2024-12-05 17:11:03.024623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.593 ms
00:24:28.909 [2024-12-05 17:11:03.024645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.025101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:28.909 [2024-12-05 17:11:03.025150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:24:28.909 [2024-12-05 17:11:03.025221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms
00:24:28.909 [2024-12-05 17:11:03.025243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.061868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.062059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:28.909 [2024-12-05 17:11:03.062123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.062149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.062237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.062261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:28.909 [2024-12-05 17:11:03.062281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.062300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.062405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.062435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:28.909 [2024-12-05 17:11:03.062457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.062520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.062554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.062576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:28.909 [2024-12-05 17:11:03.062664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.062688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.147414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.147587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:28.909 [2024-12-05 17:11:03.147649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.147672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.216938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.217141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:28.909 [2024-12-05 17:11:03.217198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.217223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.217300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.217324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:28.909 [2024-12-05 17:11:03.217345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.217372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.217445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.217514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:28.909 [2024-12-05 17:11:03.217538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.217557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.217685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.217712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:28.909 [2024-12-05 17:11:03.217732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.909 [2024-12-05 17:11:03.217828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.909 [2024-12-05 17:11:03.217877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.909 [2024-12-05 17:11:03.217902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:28.909 [2024-12-05 17:11:03.217989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.910 [2024-12-05 17:11:03.218018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.910 [2024-12-05 17:11:03.218074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.910 [2024-12-05 17:11:03.218096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:28.910 [2024-12-05 17:11:03.218117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.910 [2024-12-05 17:11:03.218173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.910 [2024-12-05 17:11:03.218244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:28.910 [2024-12-05 17:11:03.218270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:28.910 [2024-12-05 17:11:03.218335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:28.910 [2024-12-05 17:11:03.218359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:28.910 [2024-12-05 17:11:03.218519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 560.271 ms, result 0
00:24:30.826
00:24:30.826
00:24:30.826 17:11:04 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
[2024-12-05 17:11:04.832080] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
[2024-12-05 17:11:04.832230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79279 ]
[2024-12-05 17:11:04.995299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-05 17:11:05.115910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-05 17:11:05.413617] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-05 17:11:05.413702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-05 17:11:05.574567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.574634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:31.348 [2024-12-05 17:11:05.574649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:31.348 [2024-12-05 17:11:05.574658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.574713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.574727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:31.348 [2024-12-05 17:11:05.574736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:24:31.348 [2024-12-05 17:11:05.574744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.574764] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:31.348 [2024-12-05 17:11:05.575879] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:31.348 [2024-12-05 17:11:05.575942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.575968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:31.348 [2024-12-05 17:11:05.575979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.182 ms
00:24:31.348 [2024-12-05 17:11:05.575987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.577688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:31.348 [2024-12-05 17:11:05.592255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.592297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:31.348 [2024-12-05 17:11:05.592311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.568 ms
00:24:31.348 [2024-12-05 17:11:05.592319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.592403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.592413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:31.348 [2024-12-05 17:11:05.592423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms
00:24:31.348 [2024-12-05 17:11:05.592431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.600554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.600590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:31.348 [2024-12-05 17:11:05.600600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.043 ms
00:24:31.348 [2024-12-05 17:11:05.600614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.348 [2024-12-05 17:11:05.600719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.348 [2024-12-05 17:11:05.600729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:31.349 [2024-12-05 17:11:05.600738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:24:31.349 [2024-12-05 17:11:05.600746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.600792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.349 [2024-12-05 17:11:05.600803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:31.349 [2024-12-05 17:11:05.600811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:24:31.349 [2024-12-05 17:11:05.600819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.600847] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:31.349 [2024-12-05 17:11:05.604797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.349 [2024-12-05 17:11:05.604830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:31.349 [2024-12-05 17:11:05.604843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.956 ms
00:24:31.349 [2024-12-05 17:11:05.604851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.604888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.349 [2024-12-05 17:11:05.604897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:31.349 [2024-12-05 17:11:05.604906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:24:31.349 [2024-12-05 17:11:05.604914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.604978] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:31.349 [2024-12-05 17:11:05.605006] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:31.349 [2024-12-05 17:11:05.605043] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:31.349 [2024-12-05 17:11:05.605061] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:31.349 [2024-12-05 17:11:05.605169] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:24:31.349 [2024-12-05 17:11:05.605180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:24:31.349 [2024-12-05 17:11:05.605192] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:24:31.349 [2024-12-05 17:11:05.605202] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605212] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605222] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:24:31.349 [2024-12-05 17:11:05.605229] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:24:31.349 [2024-12-05 17:11:05.605240] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:24:31.349 [2024-12-05 17:11:05.605247] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:24:31.349 [2024-12-05 17:11:05.605256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.349 [2024-12-05 17:11:05.605264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:24:31.349 [2024-12-05 17:11:05.605273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms
00:24:31.349 [2024-12-05 17:11:05.605281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.605367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.349 [2024-12-05 17:11:05.605377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:24:31.349 [2024-12-05 17:11:05.605385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms
00:24:31.349 [2024-12-05 17:11:05.605393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.349 [2024-12-05 17:11:05.605499] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:24:31.349 [2024-12-05 17:11:05.605511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:24:31.349 [2024-12-05 17:11:05.605520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:24:31.349 [2024-12-05 17:11:05.605544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:24:31.349 [2024-12-05 17:11:05.605565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:31.349 [2024-12-05 17:11:05.605580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:24:31.349 [2024-12-05 17:11:05.605587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:24:31.349 [2024-12-05 17:11:05.605594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:24:31.349 [2024-12-05 17:11:05.605608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:24:31.349 [2024-12-05 17:11:05.605617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:24:31.349 [2024-12-05 17:11:05.605624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:24:31.349 [2024-12-05 17:11:05.605638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:24:31.349 [2024-12-05 17:11:05.605658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:24:31.349 [2024-12-05 17:11:05.605679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:24:31.349 [2024-12-05 17:11:05.605698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:24:31.349 [2024-12-05 17:11:05.605718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:24:31.349 [2024-12-05 17:11:05.605737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:24:31.349 [2024-12-05 17:11:05.605750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:24:31.349 [2024-12-05 17:11:05.605757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:24:31.349 [2024-12-05 17:11:05.605762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:24:31.349 [2024-12-05 17:11:05.605769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:24:31.349 [2024-12-05 17:11:05.605776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:24:31.349 [2024-12-05 17:11:05.605782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:24:31.349 [2024-12-05 17:11:05.605795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:24:31.349 [2024-12-05 17:11:05.605802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605809] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:24:31.349 [2024-12-05 17:11:05.605816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:24:31.349 [2024-12-05 17:11:05.605824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:24:31.349 [2024-12-05 17:11:05.605843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:24:31.349 [2024-12-05 17:11:05.605850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:24:31.349 [2024-12-05 17:11:05.605857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:24:31.349 [2024-12-05 17:11:05.605864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:24:31.349 [2024-12-05 17:11:05.605870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:24:31.349 [2024-12-05 17:11:05.605877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:24:31.349 [2024-12-05 17:11:05.605887] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:24:31.349 [2024-12-05 17:11:05.605896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:31.349 [2024-12-05 17:11:05.605907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:24:31.349 [2024-12-05 17:11:05.605914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:24:31.349 [2024-12-05 17:11:05.605921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:24:31.349 [2024-12-05 17:11:05.605929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:24:31.349 [2024-12-05 17:11:05.605936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:24:31.349 [2024-12-05 17:11:05.605942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:24:31.349 [2024-12-05 17:11:05.605963] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:24:31.349 [2024-12-05 17:11:05.605971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:24:31.350 [2024-12-05 17:11:05.605978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:24:31.350 [2024-12-05 17:11:05.605986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.605994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.606001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.606008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.606015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:24:31.350 [2024-12-05 17:11:05.606023] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:24:31.350 [2024-12-05 17:11:05.606031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.606040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:24:31.350 [2024-12-05 17:11:05.606047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:24:31.350 [2024-12-05 17:11:05.606055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:24:31.350 [2024-12-05 17:11:05.606063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:24:31.350 [2024-12-05 17:11:05.606070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.606078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:24:31.350 [2024-12-05 17:11:05.606087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms
00:24:31.350 [2024-12-05 17:11:05.606096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.637800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.637842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:24:31.350 [2024-12-05 17:11:05.637853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.655 ms
00:24:31.350 [2024-12-05 17:11:05.637866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.637977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.637986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:24:31.350 [2024-12-05 17:11:05.637995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms
00:24:31.350 [2024-12-05 17:11:05.638004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.681481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.681526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:24:31.350 [2024-12-05 17:11:05.681540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.414 ms
00:24:31.350 [2024-12-05 17:11:05.681549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.681597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.681607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:24:31.350 [2024-12-05 17:11:05.681620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:31.350 [2024-12-05 17:11:05.681628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.682270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.682304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:24:31.350 [2024-12-05 17:11:05.682316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms
00:24:31.350 [2024-12-05 17:11:05.682324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.682489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.682499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:24:31.350 [2024-12-05 17:11:05.682513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms
00:24:31.350 [2024-12-05 17:11:05.682521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.698149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.698191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:24:31.350 [2024-12-05 17:11:05.698202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.608 ms
00:24:31.350 [2024-12-05 17:11:05.698211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.350 [2024-12-05 17:11:05.712500] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0
00:24:31.350 [2024-12-05 17:11:05.712543] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:24:31.350 [2024-12-05 17:11:05.712556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.350 [2024-12-05 17:11:05.712565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:24:31.350 [2024-12-05 17:11:05.712575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms
00:24:31.350 [2024-12-05 17:11:05.712582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.739132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.739183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:24:31.611 [2024-12-05 17:11:05.739196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.496 ms
00:24:31.611 [2024-12-05 17:11:05.739203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.752045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.752175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:24:31.611 [2024-12-05 17:11:05.752187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.789 ms
00:24:31.611 [2024-12-05 17:11:05.752194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.764866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.764913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:24:31.611 [2024-12-05 17:11:05.764924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.623 ms
00:24:31.611 [2024-12-05 17:11:05.764931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.765577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.765608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:24:31.611 [2024-12-05 17:11:05.765623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms
00:24:31.611 [2024-12-05 17:11:05.765630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.833356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.833417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:24:31.611 [2024-12-05 17:11:05.833439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.706 ms
00:24:31.611 [2024-12-05 17:11:05.833449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.844638] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:24:31.611 [2024-12-05 17:11:05.847750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.847794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:24:31.611 [2024-12-05 17:11:05.847807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.243 ms
00:24:31.611 [2024-12-05 17:11:05.847815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.847903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.847914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:24:31.611 [2024-12-05 17:11:05.847926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms
00:24:31.611 [2024-12-05 17:11:05.847934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.849587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.849634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:24:31.611 [2024-12-05 17:11:05.849645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.597 ms
00:24:31.611 [2024-12-05 17:11:05.849652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.849687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.849696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:24:31.611 [2024-12-05 17:11:05.849705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:24:31.611 [2024-12-05 17:11:05.849718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.849753] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:24:31.611 [2024-12-05 17:11:05.849764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.849773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:24:31.611 [2024-12-05 17:11:05.849782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:24:31.611 [2024-12-05 17:11:05.849793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.875929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.875988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:24:31.611 [2024-12-05 17:11:05.876008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.116 ms
00:24:31.611 [2024-12-05 17:11:05.876017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.876103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:31.611 [2024-12-05 17:11:05.876113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:24:31.611 [2024-12-05 17:11:05.876123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:24:31.611 [2024-12-05 17:11:05.876131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:31.611 [2024-12-05 17:11:05.877397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 302.320 ms, result 0
00:24:33.000  [2024-12-05T17:11:08.312Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T17:11:09.259Z] Copying: 34/1024 [MB] (21 MBps) [2024-12-05T17:11:10.204Z] Copying: 57/1024 [MB] (23 MBps) [2024-12-05T17:11:11.148Z] Copying: 81/1024 [MB] (23 MBps) [2024-12-05T17:11:12.094Z] Copying: 108/1024 [MB] (26 MBps) [2024-12-05T17:11:13.484Z] Copying: 126/1024 [MB] (18 MBps) [2024-12-05T17:11:14.427Z] Copying: 148/1024 [MB] (21 MBps) [2024-12-05T17:11:15.369Z] Copying: 167/1024 [MB] (19 MBps) [2024-12-05T17:11:16.353Z] Copying: 185/1024 [MB] (17 MBps) [2024-12-05T17:11:17.326Z] Copying: 197/1024 [MB] (12 MBps) [2024-12-05T17:11:18.271Z] Copying: 215/1024 [MB] (17 MBps) [2024-12-05T17:11:19.217Z] Copying: 232/1024 [MB] (17 MBps) [2024-12-05T17:11:20.164Z] Copying: 245/1024 [MB] (13 MBps) [2024-12-05T17:11:21.118Z] Copying: 265/1024 [MB] (19 MBps) [2024-12-05T17:11:22.506Z] Copying: 287/1024 [MB] (22 MBps) [2024-12-05T17:11:23.095Z] Copying: 307/1024 [MB] (19 MBps) [2024-12-05T17:11:24.483Z] Copying: 329/1024 [MB] (21 MBps) [2024-12-05T17:11:25.427Z] Copying: 354/1024 [MB] (25 MBps) [2024-12-05T17:11:26.370Z] Copying: 377/1024 [MB] (22 MBps) [2024-12-05T17:11:27.315Z] Copying: 395/1024 [MB] (17 MBps) [2024-12-05T17:11:28.257Z] Copying: 414/1024 [MB] (19 MBps) [2024-12-05T17:11:29.203Z] Copying: 426/1024 [MB] (11 MBps) [2024-12-05T17:11:30.145Z] Copying: 444/1024 [MB] (18 MBps) [2024-12-05T17:11:31.089Z] Copying: 456/1024 [MB] (11 MBps) [2024-12-05T17:11:32.476Z] Copying: 473/1024 [MB] (16 MBps) [2024-12-05T17:11:33.419Z] Copying: 484/1024 [MB] (11 MBps) [2024-12-05T17:11:34.363Z] Copying: 499/1024 [MB] (14 MBps) [2024-12-05T17:11:35.302Z] Copying: 516/1024 [MB] (16 MBps) [2024-12-05T17:11:36.243Z] Copying: 532/1024 [MB] (16 MBps) [2024-12-05T17:11:37.187Z] Copying: 552/1024 [MB] (19 MBps) [2024-12-05T17:11:38.133Z] Copying: 568/1024 [MB] (15 MBps) [2024-12-05T17:11:39.077Z] Copying: 585/1024 [MB] (17 MBps) [2024-12-05T17:11:40.465Z] Copying: 598/1024 [MB] (12 MBps) [2024-12-05T17:11:41.408Z] Copying: 611/1024 [MB] (13 MBps) [2024-12-05T17:11:42.352Z] Copying: 625/1024 [MB] (13 MBps) [2024-12-05T17:11:43.295Z] Copying: 635/1024 [MB] (10 MBps) [2024-12-05T17:11:44.236Z] Copying: 645/1024 [MB] (10 MBps) [2024-12-05T17:11:45.275Z] Copying: 656/1024 [MB] (10 MBps) [2024-12-05T17:11:46.223Z] Copying: 667/1024 [MB] (10 MBps) [2024-12-05T17:11:47.167Z] Copying: 686/1024 [MB] (19 MBps) [2024-12-05T17:11:48.110Z] Copying: 710/1024 [MB] (23 MBps) [2024-12-05T17:11:49.498Z] Copying: 734/1024 [MB] (24 MBps) [2024-12-05T17:11:50.073Z] Copying: 755/1024 [MB] (20 MBps) [2024-12-05T17:11:51.464Z] Copying: 776/1024 [MB] (20 MBps) [2024-12-05T17:11:52.409Z] Copying: 796/1024 [MB] (20 MBps) [2024-12-05T17:11:53.354Z] Copying: 818/1024 [MB] (22 MBps) [2024-12-05T17:11:54.300Z] Copying: 839/1024 [MB] (21 MBps) [2024-12-05T17:11:55.249Z] Copying: 851/1024 [MB] (11 MBps) [2024-12-05T17:11:56.193Z] Copying: 874/1024 [MB] (22 MBps) [2024-12-05T17:11:57.138Z] Copying: 890/1024 [MB] (16 MBps) [2024-12-05T17:11:58.081Z] Copying: 910/1024 [MB] (20 MBps) [2024-12-05T17:11:59.470Z] Copying: 932/1024 [MB] (21 MBps) [2024-12-05T17:12:00.424Z] Copying: 954/1024 [MB] (22 MBps) [2024-12-05T17:12:01.368Z] Copying: 973/1024 [MB] (19 MBps) [2024-12-05T17:12:02.311Z] Copying: 991/1024 [MB] (17 MBps) [2024-12-05T17:12:02.884Z] Copying: 1011/1024 [MB] (19 MBps) [2024-12-05T17:12:03.457Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-12-05 17:12:03.204719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.090 [2024-12-05 17:12:03.204813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:25:29.090 [2024-12-05 17:12:03.204851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:25:29.090 [2024-12-05 17:12:03.204861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.090 [2024-12-05 17:12:03.204887] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:25:29.090 [2024-12-05 17:12:03.207984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.090 [2024-12-05 17:12:03.208036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:25:29.090 [2024-12-05 17:12:03.208048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms
00:25:29.090 [2024-12-05 17:12:03.208058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.090 [2024-12-05 17:12:03.208296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.090 [2024-12-05 17:12:03.208316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:25:29.090 [2024-12-05 17:12:03.208333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms
00:25:29.090 [2024-12-05 17:12:03.208342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.090 [2024-12-05 17:12:03.214822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.090 [2024-12-05 17:12:03.214891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:25:29.090 [2024-12-05 17:12:03.214903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.461 ms
00:25:29.090 [2024-12-05 17:12:03.214911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.091 [2024-12-05 17:12:03.221372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.091 [2024-12-05 17:12:03.221414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:25:29.091 [2024-12-05 17:12:03.221426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms
00:25:29.091 [2024-12-05 17:12:03.221442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.091 [2024-12-05 17:12:03.249197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.091 [2024-12-05 17:12:03.249255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:25:29.091 [2024-12-05 17:12:03.249269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.708 ms
00:25:29.091 [2024-12-05 17:12:03.249278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:29.091 [2024-12-05 17:12:03.267273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:29.091 [2024-12-05 17:12:03.267323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:25:29.091 [2024-12-05 17:12:03.267336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.944 ms
00:25:29.091 [2024-12-05 17:12:03.267345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status:
0 00:25:29.353 [2024-12-05 17:12:03.478409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.353 [2024-12-05 17:12:03.478471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.353 [2024-12-05 17:12:03.478487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 211.009 ms 00:25:29.353 [2024-12-05 17:12:03.478496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.354 [2024-12-05 17:12:03.505630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.354 [2024-12-05 17:12:03.505683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.354 [2024-12-05 17:12:03.505697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.108 ms 00:25:29.354 [2024-12-05 17:12:03.505706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.354 [2024-12-05 17:12:03.531607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.354 [2024-12-05 17:12:03.531653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.354 [2024-12-05 17:12:03.531667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.853 ms 00:25:29.354 [2024-12-05 17:12:03.531676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.354 [2024-12-05 17:12:03.556907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.354 [2024-12-05 17:12:03.556964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.354 [2024-12-05 17:12:03.556977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.183 ms 00:25:29.354 [2024-12-05 17:12:03.556985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.354 [2024-12-05 17:12:03.582067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.354 [2024-12-05 17:12:03.582118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.354 [2024-12-05 17:12:03.582131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.988 ms 00:25:29.354 [2024-12-05 17:12:03.582139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.354 [2024-12-05 17:12:03.582185] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.354 [2024-12-05 17:12:03.582202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:29.354 [2024-12-05 17:12:03.582214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 
17:12:03.582274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 
00:25:29.354 [2024-12-05 17:12:03.582467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 
wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.354 [2024-12-05 17:12:03.582723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.582996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.355 [2024-12-05 17:12:03.583013] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.355 [2024-12-05 17:12:03.583022] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a74b0820-3859-4012-a4fc-e9af9488b070 00:25:29.355 [2024-12-05 17:12:03.583031] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:29.355 [2024-12-05 17:12:03.583040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 38336 00:25:29.355 [2024-12-05 17:12:03.583048] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 37376 00:25:29.355 [2024-12-05 17:12:03.583066] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0257 00:25:29.355 [2024-12-05 17:12:03.583075] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.355 [2024-12-05 17:12:03.583092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.355 [2024-12-05 17:12:03.583099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.355 
[2024-12-05 17:12:03.583107] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.355 [2024-12-05 17:12:03.583113] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.355 [2024-12-05 17:12:03.583121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.355 [2024-12-05 17:12:03.583130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.355 [2024-12-05 17:12:03.583139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.938 ms 00:25:29.355 [2024-12-05 17:12:03.583147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.597099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.355 [2024-12-05 17:12:03.597152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.355 [2024-12-05 17:12:03.597164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.931 ms 00:25:29.355 [2024-12-05 17:12:03.597172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.597583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.355 [2024-12-05 17:12:03.597604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.355 [2024-12-05 17:12:03.597615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:25:29.355 [2024-12-05 17:12:03.597623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.634494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.355 [2024-12-05 17:12:03.634550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.355 [2024-12-05 17:12:03.634563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.355 [2024-12-05 17:12:03.634573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.634647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.355 [2024-12-05 17:12:03.634657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.355 [2024-12-05 17:12:03.634667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.355 [2024-12-05 17:12:03.634677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.634770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.355 [2024-12-05 17:12:03.634786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.355 [2024-12-05 17:12:03.634795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.355 [2024-12-05 17:12:03.634804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.355 [2024-12-05 17:12:03.634822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.355 [2024-12-05 17:12:03.634831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.355 [2024-12-05 17:12:03.634838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.355 [2024-12-05 17:12:03.634846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.720657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.720732] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.617 [2024-12-05 17:12:03.720745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.720754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.790788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.790855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.617 [2024-12-05 17:12:03.790867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.790876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.790940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.790972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.617 [2024-12-05 17:12:03.790989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.790998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.791071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.617 [2024-12-05 17:12:03.791081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.791089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.791203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.617 [2024-12-05 17:12:03.791212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.791222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.791264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.617 [2024-12-05 17:12:03.791273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.791280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.791332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.617 [2024-12-05 17:12:03.791341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.791353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.617 [2024-12-05 17:12:03.791412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.617 [2024-12-05 17:12:03.791422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.617 [2024-12-05 17:12:03.791430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.617 [2024-12-05 17:12:03.791568] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process 
finished, name 'FTL shutdown', duration = 586.824 ms, result 0 00:25:30.190 00:25:30.190 00:25:30.452 17:12:04 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.995 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77229 00:25:32.995 17:12:06 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77229 ']' 00:25:32.995 Process with pid 77229 is not found 00:25:32.995 17:12:06 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77229 00:25:32.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77229) - No such process 00:25:32.995 17:12:06 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77229 is not found' 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:32.995 Remove shared memory files 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:32.995 17:12:06 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:32.995 00:25:32.995 real 4m19.639s 00:25:32.995 user 4m7.109s 00:25:32.995 sys 0m12.328s 00:25:32.995 17:12:06 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:32.995 ************************************ 00:25:32.995 END TEST ftl_restore 00:25:32.995 ************************************ 00:25:32.995 17:12:06 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:32.995 17:12:06 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:32.995 17:12:06 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:32.995 17:12:06 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:32.995 17:12:06 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:32.995 ************************************ 00:25:32.995 START TEST ftl_dirty_shutdown 00:25:32.995 ************************************ 00:25:32.995 17:12:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:32.995 * Looking for test storage... 
00:25:32.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.995 17:12:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:32.995 17:12:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:32.995 17:12:06 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.995 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:32.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.996 --rc genhtml_branch_coverage=1 00:25:32.996 --rc genhtml_function_coverage=1 00:25:32.996 --rc genhtml_legend=1 00:25:32.996 --rc geninfo_all_blocks=1 00:25:32.996 --rc geninfo_unexecuted_blocks=1 00:25:32.996 00:25:32.996 ' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:32.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.996 --rc genhtml_branch_coverage=1 00:25:32.996 --rc genhtml_function_coverage=1 00:25:32.996 --rc genhtml_legend=1 00:25:32.996 --rc geninfo_all_blocks=1 00:25:32.996 --rc geninfo_unexecuted_blocks=1 00:25:32.996 00:25:32.996 ' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:32.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.996 --rc genhtml_branch_coverage=1 00:25:32.996 --rc genhtml_function_coverage=1 00:25:32.996 --rc genhtml_legend=1 00:25:32.996 --rc geninfo_all_blocks=1 00:25:32.996 --rc geninfo_unexecuted_blocks=1 00:25:32.996 00:25:32.996 ' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:32.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.996 --rc genhtml_branch_coverage=1 00:25:32.996 --rc genhtml_function_coverage=1 00:25:32.996 --rc genhtml_legend=1 00:25:32.996 --rc geninfo_all_blocks=1 00:25:32.996 --rc geninfo_unexecuted_blocks=1 00:25:32.996 00:25:32.996 ' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:32.996 17:12:07 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79983 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79983 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79983 ']' 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:32.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.996 17:12:07 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:32.996 [2024-12-05 17:12:07.164791] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:25:32.996 [2024-12-05 17:12:07.164907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79983 ] 00:25:32.996 [2024-12-05 17:12:07.326700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.257 [2024-12-05 17:12:07.428610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:33.830 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:34.092 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:34.354 { 00:25:34.354 "name": "nvme0n1", 00:25:34.354 "aliases": [ 00:25:34.354 "2779877c-e289-4c14-b134-0a7344388630" 00:25:34.354 ], 00:25:34.354 "product_name": "NVMe disk", 00:25:34.354 "block_size": 4096, 00:25:34.354 "num_blocks": 1310720, 00:25:34.354 "uuid": "2779877c-e289-4c14-b134-0a7344388630", 00:25:34.354 "numa_id": -1, 00:25:34.354 "assigned_rate_limits": { 00:25:34.354 "rw_ios_per_sec": 0, 00:25:34.354 "rw_mbytes_per_sec": 0, 00:25:34.354 "r_mbytes_per_sec": 0, 00:25:34.354 "w_mbytes_per_sec": 0 00:25:34.354 }, 00:25:34.354 "claimed": true, 00:25:34.354 "claim_type": "read_many_write_one", 00:25:34.354 "zoned": false, 00:25:34.354 "supported_io_types": { 00:25:34.354 "read": true, 00:25:34.354 "write": true, 00:25:34.354 "unmap": true, 00:25:34.354 "flush": true, 00:25:34.354 "reset": true, 00:25:34.354 "nvme_admin": true, 00:25:34.354 "nvme_io": true, 00:25:34.354 "nvme_io_md": false, 00:25:34.354 "write_zeroes": true, 00:25:34.354 "zcopy": false, 00:25:34.354 "get_zone_info": false, 00:25:34.354 "zone_management": false, 00:25:34.354 "zone_append": false, 00:25:34.354 "compare": true, 00:25:34.354 "compare_and_write": false, 00:25:34.354 "abort": true, 00:25:34.354 "seek_hole": false, 00:25:34.354 "seek_data": false, 00:25:34.354 
"copy": true, 00:25:34.354 "nvme_iov_md": false 00:25:34.354 }, 00:25:34.354 "driver_specific": { 00:25:34.354 "nvme": [ 00:25:34.354 { 00:25:34.354 "pci_address": "0000:00:11.0", 00:25:34.354 "trid": { 00:25:34.354 "trtype": "PCIe", 00:25:34.354 "traddr": "0000:00:11.0" 00:25:34.354 }, 00:25:34.354 "ctrlr_data": { 00:25:34.354 "cntlid": 0, 00:25:34.354 "vendor_id": "0x1b36", 00:25:34.354 "model_number": "QEMU NVMe Ctrl", 00:25:34.354 "serial_number": "12341", 00:25:34.354 "firmware_revision": "8.0.0", 00:25:34.354 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:34.354 "oacs": { 00:25:34.354 "security": 0, 00:25:34.354 "format": 1, 00:25:34.354 "firmware": 0, 00:25:34.354 "ns_manage": 1 00:25:34.354 }, 00:25:34.354 "multi_ctrlr": false, 00:25:34.354 "ana_reporting": false 00:25:34.354 }, 00:25:34.354 "vs": { 00:25:34.354 "nvme_version": "1.4" 00:25:34.354 }, 00:25:34.354 "ns_data": { 00:25:34.354 "id": 1, 00:25:34.354 "can_share": false 00:25:34.354 } 00:25:34.354 } 00:25:34.354 ], 00:25:34.354 "mp_policy": "active_passive" 00:25:34.354 } 00:25:34.354 } 00:25:34.354 ]' 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:34.354 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:34.616 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=dfc5814f-f9b5-41e9-8a7b-30043cb80add 00:25:34.616 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:34.616 17:12:08 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dfc5814f-f9b5-41e9-8a7b-30043cb80add 00:25:34.877 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:35.139 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=ab05872b-b525-4194-a772-7457ea4b2232 00:25:35.139 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u ab05872b-b525-4194-a772-7457ea4b2232 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:35.401 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:35.663 { 00:25:35.663 "name": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:35.663 "aliases": [ 00:25:35.663 "lvs/nvme0n1p0" 00:25:35.663 ], 00:25:35.663 "product_name": "Logical Volume", 00:25:35.663 "block_size": 4096, 00:25:35.663 "num_blocks": 26476544, 00:25:35.663 "uuid": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:35.663 "assigned_rate_limits": { 00:25:35.663 "rw_ios_per_sec": 0, 00:25:35.663 "rw_mbytes_per_sec": 0, 00:25:35.663 "r_mbytes_per_sec": 0, 00:25:35.663 "w_mbytes_per_sec": 0 00:25:35.663 }, 00:25:35.663 "claimed": false, 00:25:35.663 "zoned": false, 00:25:35.663 "supported_io_types": { 00:25:35.663 "read": true, 00:25:35.663 "write": true, 00:25:35.663 "unmap": true, 00:25:35.663 "flush": false, 00:25:35.663 "reset": true, 00:25:35.663 "nvme_admin": false, 00:25:35.663 "nvme_io": false, 00:25:35.663 "nvme_io_md": false, 00:25:35.663 "write_zeroes": true, 00:25:35.663 "zcopy": false, 00:25:35.663 "get_zone_info": false, 00:25:35.663 "zone_management": false, 00:25:35.663 "zone_append": false, 00:25:35.663 "compare": false, 00:25:35.663 "compare_and_write": false, 00:25:35.663 "abort": false, 00:25:35.663 "seek_hole": true, 00:25:35.663 "seek_data": true, 00:25:35.663 "copy": false, 00:25:35.663 "nvme_iov_md": false 00:25:35.663 }, 00:25:35.663 "driver_specific": { 00:25:35.663 "lvol": { 00:25:35.663 "lvol_store_uuid": "ab05872b-b525-4194-a772-7457ea4b2232", 00:25:35.663 "base_bdev": "nvme0n1", 00:25:35.663 "thin_provision": true, 00:25:35.663 "num_allocated_clusters": 0, 00:25:35.663 "snapshot": false, 00:25:35.663 "clone": false, 00:25:35.663 "esnap_clone": false 00:25:35.663 } 00:25:35.663 } 00:25:35.663 } 00:25:35.663 ]' 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:35.663 17:12:09 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:35.925 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:36.187 { 00:25:36.187 "name": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:36.187 "aliases": [ 00:25:36.187 "lvs/nvme0n1p0" 00:25:36.187 ], 00:25:36.187 "product_name": "Logical Volume", 00:25:36.187 "block_size": 4096, 00:25:36.187 "num_blocks": 26476544, 00:25:36.187 "uuid": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:36.187 "assigned_rate_limits": { 00:25:36.187 "rw_ios_per_sec": 0, 00:25:36.187 "rw_mbytes_per_sec": 0, 00:25:36.187 "r_mbytes_per_sec": 0, 00:25:36.187 "w_mbytes_per_sec": 0 00:25:36.187 }, 00:25:36.187 "claimed": false, 00:25:36.187 "zoned": false, 00:25:36.187 "supported_io_types": { 00:25:36.187 "read": true, 00:25:36.187 "write": true, 00:25:36.187 "unmap": true, 00:25:36.187 "flush": false, 00:25:36.187 "reset": true, 00:25:36.187 "nvme_admin": false, 00:25:36.187 "nvme_io": false, 00:25:36.187 "nvme_io_md": false, 00:25:36.187 "write_zeroes": true, 00:25:36.187 "zcopy": false, 00:25:36.187 "get_zone_info": false, 00:25:36.187 "zone_management": false, 00:25:36.187 "zone_append": false, 00:25:36.187 "compare": false, 00:25:36.187 "compare_and_write": false, 00:25:36.187 "abort": false, 00:25:36.187 "seek_hole": true, 00:25:36.187 "seek_data": true, 00:25:36.187 "copy": false, 00:25:36.187 "nvme_iov_md": false 00:25:36.187 }, 00:25:36.187 "driver_specific": { 00:25:36.187 "lvol": { 00:25:36.187 "lvol_store_uuid": "ab05872b-b525-4194-a772-7457ea4b2232", 00:25:36.187 "base_bdev": "nvme0n1", 00:25:36.187 "thin_provision": true, 00:25:36.187 "num_allocated_clusters": 0, 00:25:36.187 "snapshot": false, 00:25:36.187 "clone": false, 00:25:36.187 "esnap_clone": false 00:25:36.187 } 00:25:36.187 } 00:25:36.187 } 00:25:36.187 ]' 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:36.187 17:12:10 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:36.448 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:36.448 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:36.449 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:36.449 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:36.449 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:36.449 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:36.449 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 273c963d-883d-4994-85c6-d4d56e8a2b7b 00:25:36.710 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:36.710 { 00:25:36.710 "name": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:36.710 "aliases": [ 00:25:36.710 "lvs/nvme0n1p0" 00:25:36.710 ], 00:25:36.710 "product_name": "Logical Volume", 00:25:36.711 "block_size": 4096, 00:25:36.711 "num_blocks": 26476544, 00:25:36.711 "uuid": "273c963d-883d-4994-85c6-d4d56e8a2b7b", 00:25:36.711 "assigned_rate_limits": { 00:25:36.711 "rw_ios_per_sec": 0, 00:25:36.711 "rw_mbytes_per_sec": 0, 00:25:36.711 "r_mbytes_per_sec": 0, 00:25:36.711 "w_mbytes_per_sec": 0 00:25:36.711 }, 00:25:36.711 "claimed": false, 00:25:36.711 "zoned": false, 00:25:36.711 "supported_io_types": { 00:25:36.711 "read": true, 00:25:36.711 "write": true, 00:25:36.711 "unmap": true, 00:25:36.711 "flush": false, 00:25:36.711 "reset": true, 00:25:36.711 "nvme_admin": false, 00:25:36.711 "nvme_io": false, 00:25:36.711 "nvme_io_md": false, 00:25:36.711 "write_zeroes": true, 00:25:36.711 "zcopy": false, 00:25:36.711 "get_zone_info": false, 00:25:36.711 "zone_management": false, 00:25:36.711 "zone_append": false, 00:25:36.711 "compare": false, 00:25:36.711 "compare_and_write": false, 00:25:36.711 "abort": false, 00:25:36.711 "seek_hole": true, 00:25:36.711 "seek_data": true, 00:25:36.711 "copy": false, 00:25:36.711 "nvme_iov_md": false 00:25:36.711 }, 00:25:36.711 "driver_specific": { 00:25:36.711 "lvol": { 00:25:36.711 "lvol_store_uuid": "ab05872b-b525-4194-a772-7457ea4b2232", 00:25:36.711 "base_bdev": "nvme0n1", 00:25:36.711 "thin_provision": true, 00:25:36.711 "num_allocated_clusters": 0, 00:25:36.711 "snapshot": false, 00:25:36.711 "clone": false, 00:25:36.711 "esnap_clone": false 00:25:36.711 } 00:25:36.711 } 00:25:36.711 } 00:25:36.711 ]' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 273c963d-883d-4994-85c6-d4d56e8a2b7b 
--l2p_dram_limit 10' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:36.711 17:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 273c963d-883d-4994-85c6-d4d56e8a2b7b --l2p_dram_limit 10 -c nvc0n1p0 00:25:36.975 [2024-12-05 17:12:11.091817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.091858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.975 [2024-12-05 17:12:11.091871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:36.975 [2024-12-05 17:12:11.091878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.091921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.091929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.975 [2024-12-05 17:12:11.091937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:36.975 [2024-12-05 17:12:11.091943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.091972] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.975 [2024-12-05 17:12:11.092524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.975 [2024-12-05 17:12:11.092545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.092552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.975 [2024-12-05 17:12:11.092560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:25:36.975 [2024-12-05 17:12:11.092566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.092590] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f6e51171-fa49-40ed-b714-ebb2439e8ed1 00:25:36.975 [2024-12-05 17:12:11.093536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.093563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:36.975 [2024-12-05 17:12:11.093571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:25:36.975 [2024-12-05 17:12:11.093578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.098318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.098350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.975 [2024-12-05 17:12:11.098359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.684 ms 00:25:36.975 [2024-12-05 17:12:11.098366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.098432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.098441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.975 [2024-12-05 17:12:11.098447] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:36.975 [2024-12-05 17:12:11.098456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.098490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.098499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.975 [2024-12-05 17:12:11.098506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:36.975 [2024-12-05 17:12:11.098513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.098529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.975 [2024-12-05 17:12:11.101427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.101453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.975 [2024-12-05 17:12:11.101463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.900 ms 00:25:36.975 [2024-12-05 17:12:11.101469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.101496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.101503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:36.975 [2024-12-05 17:12:11.101510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:36.975 [2024-12-05 17:12:11.101516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.101535] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:36.975 [2024-12-05 17:12:11.101644] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.975 [2024-12-05 17:12:11.101661] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.975 [2024-12-05 17:12:11.101670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:36.975 [2024-12-05 17:12:11.101679] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.975 [2024-12-05 17:12:11.101685] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.975 [2024-12-05 17:12:11.101693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.975 [2024-12-05 17:12:11.101698] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.975 [2024-12-05 17:12:11.101708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.975 [2024-12-05 17:12:11.101713] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.975 [2024-12-05 17:12:11.101720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.101730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.975 [2024-12-05 17:12:11.101737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:25:36.975 [2024-12-05 17:12:11.101743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.101808] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.975 [2024-12-05 17:12:11.101819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.975 [2024-12-05 17:12:11.101826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:36.975 [2024-12-05 17:12:11.101832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.975 [2024-12-05 17:12:11.101910] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.975 [2024-12-05 17:12:11.101922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.975 [2024-12-05 17:12:11.101930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.975 [2024-12-05 17:12:11.101936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.975 [2024-12-05 17:12:11.101944] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.975 [2024-12-05 17:12:11.101958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.975 [2024-12-05 17:12:11.101965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.975 [2024-12-05 17:12:11.101970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.975 [2024-12-05 17:12:11.101977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.975 [2024-12-05 17:12:11.101982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.975 [2024-12-05 17:12:11.101990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.975 [2024-12-05 17:12:11.101995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.975 [2024-12-05 17:12:11.102001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.975 [2024-12-05 17:12:11.102006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.975 [2024-12-05 17:12:11.102013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.975 [2024-12-05 17:12:11.102018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.975 [2024-12-05 17:12:11.102030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.975 [2024-12-05 17:12:11.102036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.975 [2024-12-05 17:12:11.102048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102053] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.975 [2024-12-05 17:12:11.102059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.975 [2024-12-05 17:12:11.102065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.975 [2024-12-05 17:12:11.102076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.975 [2024-12-05 17:12:11.102082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.975 [2024-12-05 17:12:11.102094] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.975 [2024-12-05 17:12:11.102099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.975 [2024-12-05 17:12:11.102111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.975 [2024-12-05 17:12:11.102118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.975 [2024-12-05 17:12:11.102124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.975 [2024-12-05 17:12:11.102130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.976 [2024-12-05 17:12:11.102135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.976 [2024-12-05 17:12:11.102142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.976 [2024-12-05 17:12:11.102147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.976 [2024-12-05 17:12:11.102154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.976 [2024-12-05 17:12:11.102159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.976 [2024-12-05 17:12:11.102165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.976 [2024-12-05 17:12:11.102170] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.976 [2024-12-05 17:12:11.102175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.976 [2024-12-05 17:12:11.102180] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.976 [2024-12-05 17:12:11.102187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.976 [2024-12-05 17:12:11.102193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.976 [2024-12-05 17:12:11.102199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.976 [2024-12-05 17:12:11.102205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.976 [2024-12-05 17:12:11.102212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.976 [2024-12-05 17:12:11.102217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.976 [2024-12-05 17:12:11.102224] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.976 [2024-12-05 17:12:11.102229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.976 [2024-12-05 17:12:11.102235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.976 [2024-12-05 17:12:11.102241] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.976 [2024-12-05 17:12:11.102250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.976 [2024-12-05 17:12:11.102266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.976 [2024-12-05 17:12:11.102272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.976 [2024-12-05 17:12:11.102279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.976 [2024-12-05 17:12:11.102284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.976 [2024-12-05 17:12:11.102291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.976 [2024-12-05 17:12:11.102297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.976 [2024-12-05 17:12:11.102305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.976 [2024-12-05 17:12:11.102310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.976 [2024-12-05 17:12:11.102318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.976 [2024-12-05 17:12:11.102348] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.976 [2024-12-05 17:12:11.102356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.976 [2024-12-05 17:12:11.102369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.976 [2024-12-05 17:12:11.102374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.976 [2024-12-05 17:12:11.102381] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.976 [2024-12-05 17:12:11.102386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.976 [2024-12-05 17:12:11.102393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.976 [2024-12-05 17:12:11.102399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:25:36.976 [2024-12-05 17:12:11.102406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.976 [2024-12-05 17:12:11.102444] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:36.976 [2024-12-05 17:12:11.102455] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:42.315 [2024-12-05 17:12:15.715978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.716060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:42.315 [2024-12-05 17:12:15.716078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4613.515 ms 00:25:42.315 [2024-12-05 17:12:15.716089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.747546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.747612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.315 [2024-12-05 17:12:15.747628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.134 ms 00:25:42.315 [2024-12-05 17:12:15.747639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.747784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.747799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.315 [2024-12-05 17:12:15.747808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:42.315 [2024-12-05 17:12:15.747825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.782915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.782980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.315 [2024-12-05 17:12:15.782993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.052 ms 00:25:42.315 [2024-12-05 17:12:15.783004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.783038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.783054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.315 [2024-12-05 17:12:15.783063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:42.315 [2024-12-05 17:12:15.783082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.783653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.783696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.315 [2024-12-05 17:12:15.783707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:25:42.315 [2024-12-05 17:12:15.783717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.783832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.783843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.315 [2024-12-05 17:12:15.783856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:42.315 [2024-12-05 17:12:15.783869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.801015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.801062] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.315 [2024-12-05 17:12:15.801074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.128 ms 00:25:42.315 [2024-12-05 17:12:15.801084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.828151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:42.315 [2024-12-05 17:12:15.832457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.832509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:42.315 [2024-12-05 17:12:15.832527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.284 ms 00:25:42.315 [2024-12-05 17:12:15.832539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.934725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.934777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:42.315 [2024-12-05 17:12:15.934795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.134 ms 00:25:42.315 [2024-12-05 17:12:15.934804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.935025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.315 [2024-12-05 17:12:15.935040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:42.315 [2024-12-05 17:12:15.935056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:25:42.315 [2024-12-05 17:12:15.935063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.315 [2024-12-05 17:12:15.961133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:15.961183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:42.316 [2024-12-05 17:12:15.961199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.011 ms 00:25:42.316 [2024-12-05 17:12:15.961208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:15.986084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:15.986130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:42.316 [2024-12-05 17:12:15.986145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.817 ms 00:25:42.316 [2024-12-05 17:12:15.986153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:15.986766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:15.986804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.316 [2024-12-05 17:12:15.986816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:25:42.316 [2024-12-05 17:12:15.986826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.077512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.077568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:42.316 [2024-12-05 17:12:16.077588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.621 ms 00:25:42.316 [2024-12-05 17:12:16.077598] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.104906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.104973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:42.316 [2024-12-05 17:12:16.104990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.211 ms 00:25:42.316 [2024-12-05 17:12:16.104998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.131082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.131130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:42.316 [2024-12-05 17:12:16.131145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.029 ms 00:25:42.316 [2024-12-05 17:12:16.131152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.157253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.157301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:42.316 [2024-12-05 17:12:16.157315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.046 ms 00:25:42.316 [2024-12-05 17:12:16.157323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.157376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.157386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:42.316 [2024-12-05 17:12:16.157400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:42.316 [2024-12-05 17:12:16.157408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.157499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.316 [2024-12-05 17:12:16.157512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:42.316 [2024-12-05 17:12:16.157523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:42.316 [2024-12-05 17:12:16.157531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.316 [2024-12-05 17:12:16.158846] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5066.529 ms, result 0 00:25:42.316 { 00:25:42.316 "name": "ftl0", 00:25:42.316 "uuid": "f6e51171-fa49-40ed-b714-ebb2439e8ed1" 00:25:42.316 } 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:42.316 /dev/nbd0 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:42.316 1+0 records in 00:25:42.316 1+0 records out 00:25:42.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000443642 s, 9.2 MB/s 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:42.316 17:12:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:42.576 [2024-12-05 17:12:16.731555] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:25:42.576 [2024-12-05 17:12:16.731698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80137 ] 00:25:42.576 [2024-12-05 17:12:16.893190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.837 [2024-12-05 17:12:17.011685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.223  [2024-12-05T17:12:19.594Z] Copying: 195/1024 [MB] (195 MBps) [2024-12-05T17:12:20.549Z] Copying: 392/1024 [MB] (197 MBps) [2024-12-05T17:12:21.490Z] Copying: 622/1024 [MB] (230 MBps) [2024-12-05T17:12:22.060Z] Copying: 873/1024 [MB] (250 MBps) [2024-12-05T17:12:22.632Z] Copying: 1024/1024 [MB] (average 223 MBps) 00:25:48.265 00:25:48.265 17:12:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:50.811 17:12:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:50.811 [2024-12-05 17:12:24.616940] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:25:50.811 [2024-12-05 17:12:24.617045] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80223 ] 00:25:50.811 [2024-12-05 17:12:24.765975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.811 [2024-12-05 17:12:24.842772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.742  [2024-12-05T17:12:27.041Z] Copying: 28/1024 [MB] (28 MBps) [2024-12-05T17:12:28.425Z] Copying: 61/1024 [MB] (32 MBps) [2024-12-05T17:12:29.364Z] Copying: 82/1024 [MB] (21 MBps) [2024-12-05T17:12:30.305Z] Copying: 100/1024 [MB] (18 MBps) [2024-12-05T17:12:31.270Z] Copying: 117/1024 [MB] (16 MBps) [2024-12-05T17:12:32.212Z] Copying: 135/1024 [MB] (17 MBps) [2024-12-05T17:12:33.150Z] Copying: 154/1024 [MB] (19 MBps) [2024-12-05T17:12:34.092Z] Copying: 177/1024 [MB] (22 MBps) [2024-12-05T17:12:35.031Z] Copying: 199/1024 [MB] (22 MBps) [2024-12-05T17:12:36.427Z] Copying: 220/1024 [MB] (20 MBps) [2024-12-05T17:12:37.368Z] Copying: 241/1024 [MB] (20 MBps) [2024-12-05T17:12:38.308Z] Copying: 261/1024 [MB] (20 MBps) [2024-12-05T17:12:39.245Z] Copying: 280/1024 [MB] (18 MBps) [2024-12-05T17:12:40.181Z] Copying: 298/1024 [MB] (18 MBps) [2024-12-05T17:12:41.122Z] Copying: 323/1024 [MB] (24 MBps) [2024-12-05T17:12:42.063Z] Copying: 344/1024 [MB] (21 MBps) [2024-12-05T17:12:43.439Z] Copying: 361/1024 [MB] (17 MBps) [2024-12-05T17:12:44.376Z] Copying: 382/1024 [MB] (20 MBps) [2024-12-05T17:12:45.314Z] Copying: 412/1024 [MB] (29 MBps) [2024-12-05T17:12:46.249Z] Copying: 432/1024 [MB] (19 MBps) [2024-12-05T17:12:47.183Z] Copying: 465/1024 [MB] (33 MBps) [2024-12-05T17:12:48.115Z] Copying: 493/1024 [MB] (27 MBps) [2024-12-05T17:12:49.047Z] Copying: 520/1024 [MB] (27 MBps) [2024-12-05T17:12:50.441Z] Copying: 549/1024 [MB] (29 MBps) [2024-12-05T17:12:51.488Z] Copying: 578/1024 [MB] (29 MBps) [2024-12-05T17:12:52.061Z] Copying: 612/1024 [MB] (33 MBps) [2024-12-05T17:12:53.440Z] Copying: 635/1024 [MB] (23 MBps) [2024-12-05T17:12:54.395Z] Copying: 658/1024 [MB] (23 MBps) [2024-12-05T17:12:55.347Z] Copying: 690/1024 [MB] (32 MBps) [2024-12-05T17:12:56.278Z] Copying: 713/1024 [MB] (22 MBps) [2024-12-05T17:12:57.230Z] Copying: 746/1024 [MB] (33 MBps) [2024-12-05T17:12:58.163Z] Copying: 779/1024 [MB] (33 MBps) [2024-12-05T17:12:59.098Z] Copying: 813/1024 [MB] (33 MBps) [2024-12-05T17:13:00.037Z] Copying: 845/1024 [MB] (32 MBps) [2024-12-05T17:13:01.409Z] Copying: 867/1024 [MB] (21 MBps) [2024-12-05T17:13:02.348Z] Copying: 900/1024 [MB] (33 MBps) [2024-12-05T17:13:03.284Z] Copying: 934/1024 [MB] (33 MBps) [2024-12-05T17:13:04.218Z] Copying: 955/1024 [MB] (21 MBps) [2024-12-05T17:13:05.152Z] Copying: 988/1024 [MB] (32 MBps) [2024-12-05T17:13:05.152Z] Copying: 1022/1024 [MB] (33 MBps) [2024-12-05T17:13:05.716Z] Copying: 1024/1024 [MB] (average 25 MBps) 00:26:31.349 00:26:31.349 17:13:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:31.349 17:13:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:31.606 17:13:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:31.865 [2024-12-05 17:13:06.001105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.001148] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:31.865 [2024-12-05 17:13:06.001160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:31.865 [2024-12-05 17:13:06.001169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.001190] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:31.865 [2024-12-05 17:13:06.003366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.003388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:31.865 [2024-12-05 17:13:06.003398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.160 ms 00:26:31.865 [2024-12-05 17:13:06.003410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.005744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.005769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:31.865 [2024-12-05 17:13:06.005779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:26:31.865 [2024-12-05 17:13:06.005786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.022352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.022378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:31.865 [2024-12-05 17:13:06.022388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.547 ms 00:26:31.865 [2024-12-05 17:13:06.022395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.027228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.027249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:31.865 [2024-12-05 17:13:06.027259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.800 ms 00:26:31.865 [2024-12-05 17:13:06.027268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.046685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.046710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:31.865 [2024-12-05 17:13:06.046720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.360 ms 00:26:31.865 [2024-12-05 17:13:06.046727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.059360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.059385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:31.865 [2024-12-05 17:13:06.059399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.599 ms 00:26:31.865 [2024-12-05 17:13:06.059405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.059513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.059522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:31.865 [2024-12-05 17:13:06.059531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:26:31.865 [2024-12-05 17:13:06.059537] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.077386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.077410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:31.865 [2024-12-05 17:13:06.077419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:26:31.865 [2024-12-05 17:13:06.077426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.865 [2024-12-05 17:13:06.094650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.865 [2024-12-05 17:13:06.094672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:31.866 [2024-12-05 17:13:06.094682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.193 ms 00:26:31.866 [2024-12-05 17:13:06.094688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.111850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.866 [2024-12-05 17:13:06.111874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:31.866 [2024-12-05 17:13:06.111883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.127 ms 00:26:31.866 [2024-12-05 17:13:06.111889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.128718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.866 [2024-12-05 17:13:06.128747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:31.866 [2024-12-05 17:13:06.128756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.771 ms 00:26:31.866 [2024-12-05 17:13:06.128761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.128791] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:31.866 [2024-12-05 17:13:06.128803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: 
free 00:26:31.866 [2024-12-05 17:13:06.128883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.128999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 
261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129434] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:31.866 [2024-12-05 17:13:06.129541] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:31.866 [2024-12-05 17:13:06.129549] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6e51171-fa49-40ed-b714-ebb2439e8ed1 00:26:31.866 [2024-12-05 17:13:06.129555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:31.866 [2024-12-05 17:13:06.129563] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:31.866 [2024-12-05 17:13:06.129570] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:31.866 [2024-12-05 17:13:06.129577] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:31.866 [2024-12-05 17:13:06.129583] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:31.866 [2024-12-05 17:13:06.129591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:31.866 [2024-12-05 17:13:06.129597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:31.866 [2024-12-05 17:13:06.129603] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:31.866 [2024-12-05 17:13:06.129608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:31.866 [2024-12-05 17:13:06.129615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.866 [2024-12-05 17:13:06.129621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Dump statistics 00:26:31.866 [2024-12-05 17:13:06.129629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:26:31.866 [2024-12-05 17:13:06.129634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.139891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.866 [2024-12-05 17:13:06.139917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:31.866 [2024-12-05 17:13:06.139927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.230 ms 00:26:31.866 [2024-12-05 17:13:06.139933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.140249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:31.866 [2024-12-05 17:13:06.140258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:31.866 [2024-12-05 17:13:06.140266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:26:31.866 [2024-12-05 17:13:06.140272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.175499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:31.866 [2024-12-05 17:13:06.175652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:31.866 [2024-12-05 17:13:06.175669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:31.866 [2024-12-05 17:13:06.175676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.175728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:31.866 [2024-12-05 17:13:06.175735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:31.866 [2024-12-05 17:13:06.175743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:31.866 [2024-12-05 17:13:06.175749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.175807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:31.866 [2024-12-05 17:13:06.175818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:31.866 [2024-12-05 17:13:06.175826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:31.866 [2024-12-05 17:13:06.175833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:31.866 [2024-12-05 17:13:06.175850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:31.866 [2024-12-05 17:13:06.175856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:31.866 [2024-12-05 17:13:06.175863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:31.866 [2024-12-05 17:13:06.175869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.239564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.239599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:32.126 [2024-12-05 17:13:06.239610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.239617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 
17:13:06.291501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:32.126 [2024-12-05 17:13:06.291512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.291639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:32.126 [2024-12-05 17:13:06.291652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.291710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:32.126 [2024-12-05 17:13:06.291718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.291811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:32.126 [2024-12-05 17:13:06.291820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.291864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:32.126 [2024-12-05 17:13:06.291872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.291922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:32.126 [2024-12-05 17:13:06.291930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.291937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.291996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:32.126 [2024-12-05 17:13:06.292005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:32.126 [2024-12-05 17:13:06.292013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:32.126 [2024-12-05 17:13:06.292019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:32.126 [2024-12-05 17:13:06.292146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 290.998 ms, result 0 00:26:32.126 true 00:26:32.126 17:13:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79983 00:26:32.126 17:13:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79983 00:26:32.126 17:13:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:32.126 [2024-12-05 17:13:06.387629] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:32.126 [2024-12-05 17:13:06.387741] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80656 ] 00:26:32.384 [2024-12-05 17:13:06.542913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:32.384 [2024-12-05 17:13:06.636896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:33.759  [2024-12-05T17:13:09.061Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-05T17:13:09.996Z] Copying: 506/1024 [MB] (254 MBps) [2024-12-05T17:13:10.931Z] Copying: 761/1024 [MB] (254 MBps) [2024-12-05T17:13:10.931Z] Copying: 1009/1024 [MB] (248 MBps) [2024-12-05T17:13:11.868Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:26:37.501 00:26:37.501 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79983 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:37.501 17:13:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:37.501 [2024-12-05 17:13:11.584211] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:26:37.501 [2024-12-05 17:13:11.584524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80719 ] 00:26:37.501 [2024-12-05 17:13:11.741350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:37.501 [2024-12-05 17:13:11.833484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:37.761 [2024-12-05 17:13:12.069002] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:37.761 [2024-12-05 17:13:12.069058] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:38.022 [2024-12-05 17:13:12.133221] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:38.022 [2024-12-05 17:13:12.133714] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:38.022 [2024-12-05 17:13:12.134970] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:38.284 [2024-12-05 17:13:12.503010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.503071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:38.284 [2024-12-05 17:13:12.503087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:38.284 [2024-12-05 17:13:12.503098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.503155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.503165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:38.284 [2024-12-05 17:13:12.503174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 
00:26:38.284 [2024-12-05 17:13:12.503182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.503204] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:38.284 [2024-12-05 17:13:12.503983] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:38.284 [2024-12-05 17:13:12.504004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.504013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:38.284 [2024-12-05 17:13:12.504022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:26:38.284 [2024-12-05 17:13:12.504031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.505848] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:38.284 [2024-12-05 17:13:12.520495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.520545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:38.284 [2024-12-05 17:13:12.520560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.648 ms 00:26:38.284 [2024-12-05 17:13:12.520568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.520652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.520664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:38.284 [2024-12-05 17:13:12.520673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:38.284 [2024-12-05 17:13:12.520682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.529256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.529302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:38.284 [2024-12-05 17:13:12.529314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.464 ms 00:26:38.284 [2024-12-05 17:13:12.529322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.529406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.529416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:38.284 [2024-12-05 17:13:12.529425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:26:38.284 [2024-12-05 17:13:12.529433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.529481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.529491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:38.284 [2024-12-05 17:13:12.529499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:38.284 [2024-12-05 17:13:12.529507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.529529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:38.284 [2024-12-05 17:13:12.533782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.533825] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:38.284 [2024-12-05 17:13:12.533837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.258 ms 00:26:38.284 [2024-12-05 17:13:12.533845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.284 [2024-12-05 17:13:12.533885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.284 [2024-12-05 17:13:12.533893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:38.284 [2024-12-05 17:13:12.533902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:38.284 [2024-12-05 17:13:12.533910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.285 [2024-12-05 17:13:12.533994] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:38.285 [2024-12-05 17:13:12.534020] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:38.285 [2024-12-05 17:13:12.534057] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:38.285 [2024-12-05 17:13:12.534073] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:38.285 [2024-12-05 17:13:12.534179] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:38.285 [2024-12-05 17:13:12.534191] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:38.285 [2024-12-05 17:13:12.534201] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:38.285 [2024-12-05 17:13:12.534214] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534224] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534232] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:38.285 [2024-12-05 17:13:12.534240] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:38.285 [2024-12-05 17:13:12.534248] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:38.285 [2024-12-05 17:13:12.534257] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:38.285 [2024-12-05 17:13:12.534265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.285 [2024-12-05 17:13:12.534273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:38.285 [2024-12-05 17:13:12.534283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:26:38.285 [2024-12-05 17:13:12.534291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.285 [2024-12-05 17:13:12.534379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.285 [2024-12-05 17:13:12.534390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:38.285 [2024-12-05 17:13:12.534398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:38.285 [2024-12-05 17:13:12.534405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.285 [2024-12-05 17:13:12.534508] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] 
NV cache layout: 00:26:38.285 [2024-12-05 17:13:12.534519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:38.285 [2024-12-05 17:13:12.534527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:38.285 [2024-12-05 17:13:12.534550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:38.285 [2024-12-05 17:13:12.534571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.285 [2024-12-05 17:13:12.534591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:38.285 [2024-12-05 17:13:12.534598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:38.285 [2024-12-05 17:13:12.534605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:38.285 [2024-12-05 17:13:12.534614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:38.285 [2024-12-05 17:13:12.534621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:38.285 [2024-12-05 17:13:12.534628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:38.285 [2024-12-05 17:13:12.534642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:38.285 [2024-12-05 17:13:12.534664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:38.285 [2024-12-05 17:13:12.534684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:38.285 [2024-12-05 17:13:12.534704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:38.285 [2024-12-05 17:13:12.534723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:38.285 [2024-12-05 17:13:12.534742] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.285 [2024-12-05 17:13:12.534755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:38.285 [2024-12-05 17:13:12.534761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:38.285 [2024-12-05 17:13:12.534768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:38.285 [2024-12-05 17:13:12.534774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:38.285 [2024-12-05 17:13:12.534782] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:38.285 [2024-12-05 17:13:12.534788] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:38.285 [2024-12-05 17:13:12.534801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:38.285 [2024-12-05 17:13:12.534808] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534814] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:38.285 [2024-12-05 17:13:12.534821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:38.285 [2024-12-05 17:13:12.534833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:38.285 [2024-12-05 17:13:12.534850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:38.285 [2024-12-05 17:13:12.534858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:38.285 [2024-12-05 17:13:12.534864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:38.285 [2024-12-05 17:13:12.534871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:38.285 [2024-12-05 17:13:12.534878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:38.285 [2024-12-05 17:13:12.534885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:38.285 [2024-12-05 17:13:12.534894] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:38.285 [2024-12-05 17:13:12.534904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.285 [2024-12-05 17:13:12.534912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:38.285 [2024-12-05 17:13:12.534920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:38.285 [2024-12-05 17:13:12.534927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:38.285 [2024-12-05 17:13:12.534934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:38.285 [2024-12-05 17:13:12.534942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:38.285 [2024-12-05 17:13:12.534963] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:38.285 [2024-12-05 17:13:12.534971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:38.285 [2024-12-05 17:13:12.534979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:38.285 [2024-12-05 17:13:12.534987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:38.285 [2024-12-05 17:13:12.534994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:38.285 [2024-12-05 17:13:12.535001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:38.285 [2024-12-05 17:13:12.535009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:38.285 [2024-12-05 17:13:12.535016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:38.285 [2024-12-05 17:13:12.535023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:38.285 [2024-12-05 17:13:12.535031] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:38.285 [2024-12-05 17:13:12.535039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:38.286 [2024-12-05 17:13:12.535048] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:38.286 [2024-12-05 17:13:12.535055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:38.286 [2024-12-05 17:13:12.535064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:38.286 [2024-12-05 17:13:12.535071] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:38.286 [2024-12-05 17:13:12.535078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.535086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:38.286 [2024-12-05 17:13:12.535096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:26:38.286 [2024-12-05 17:13:12.535104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.567367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.567419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:38.286 [2024-12-05 17:13:12.567431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.215 ms 00:26:38.286 [2024-12-05 17:13:12.567440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.567535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:38.286 [2024-12-05 17:13:12.567545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:38.286 [2024-12-05 17:13:12.567554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:38.286 [2024-12-05 17:13:12.567562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.614088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.614146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:38.286 [2024-12-05 17:13:12.614165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.462 ms 00:26:38.286 [2024-12-05 17:13:12.614174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.614231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.614242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:38.286 [2024-12-05 17:13:12.614251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:38.286 [2024-12-05 17:13:12.614260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.614901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.614927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:38.286 [2024-12-05 17:13:12.614939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:26:38.286 [2024-12-05 17:13:12.614994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.615171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.615192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:38.286 [2024-12-05 17:13:12.615202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:26:38.286 [2024-12-05 17:13:12.615211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.630712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.630878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:38.286 [2024-12-05 17:13:12.630895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.481 ms 00:26:38.286 [2024-12-05 17:13:12.630903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.286 [2024-12-05 17:13:12.644062] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:38.286 [2024-12-05 17:13:12.644216] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:38.286 [2024-12-05 17:13:12.644280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.286 [2024-12-05 17:13:12.644302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:38.286 [2024-12-05 17:13:12.644324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.266 ms 00:26:38.286 [2024-12-05 17:13:12.644343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.668904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.669051] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:38.548 [2024-12-05 17:13:12.669107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.515 ms 00:26:38.548 [2024-12-05 17:13:12.669132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.680973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.681085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:38.548 [2024-12-05 17:13:12.681132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.791 ms 00:26:38.548 [2024-12-05 17:13:12.681154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.692864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.693014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:38.548 [2024-12-05 17:13:12.693072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.458 ms 00:26:38.548 [2024-12-05 17:13:12.693097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.693751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.693852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:38.548 [2024-12-05 17:13:12.693912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:26:38.548 [2024-12-05 17:13:12.693935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.751272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.751472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:38.548 [2024-12-05 17:13:12.751527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.287 ms 00:26:38.548 [2024-12-05 17:13:12.751550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.762306] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:38.548 [2024-12-05 17:13:12.765077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.765183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:38.548 [2024-12-05 17:13:12.765231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.418 ms 00:26:38.548 [2024-12-05 17:13:12.765259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.765374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.765400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:38.548 [2024-12-05 17:13:12.765420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:38.548 [2024-12-05 17:13:12.765487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.765577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.765608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:38.548 [2024-12-05 17:13:12.765628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:38.548 [2024-12-05 17:13:12.765647] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.765716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.765739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:38.548 [2024-12-05 17:13:12.765759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:38.548 [2024-12-05 17:13:12.765783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.765825] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:38.548 [2024-12-05 17:13:12.765877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.765900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:38.548 [2024-12-05 17:13:12.765919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:38.548 [2024-12-05 17:13:12.765942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.789730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.789867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:38.548 [2024-12-05 17:13:12.789919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.733 ms 00:26:38.548 [2024-12-05 17:13:12.789941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.790091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:38.548 [2024-12-05 17:13:12.790140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:38.548 [2024-12-05 17:13:12.790162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:38.548 [2024-12-05 17:13:12.790180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:38.548 [2024-12-05 17:13:12.791219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.779 ms, result 0 00:26:39.492  [2024-12-05T17:13:14.805Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-05T17:13:16.191Z] Copying: 28/1024 [MB] (17 MBps) [2024-12-05T17:13:17.134Z] Copying: 41/1024 [MB] (13 MBps) [2024-12-05T17:13:18.077Z] Copying: 60/1024 [MB] (19 MBps) [2024-12-05T17:13:19.021Z] Copying: 78/1024 [MB] (18 MBps) [2024-12-05T17:13:19.961Z] Copying: 101/1024 [MB] (23 MBps) [2024-12-05T17:13:20.898Z] Copying: 122/1024 [MB] (20 MBps) [2024-12-05T17:13:21.887Z] Copying: 141/1024 [MB] (19 MBps) [2024-12-05T17:13:22.900Z] Copying: 156/1024 [MB] (15 MBps) [2024-12-05T17:13:23.842Z] Copying: 176/1024 [MB] (19 MBps) [2024-12-05T17:13:25.234Z] Copying: 208/1024 [MB] (32 MBps) [2024-12-05T17:13:25.805Z] Copying: 242/1024 [MB] (33 MBps) [2024-12-05T17:13:27.193Z] Copying: 264/1024 [MB] (21 MBps) [2024-12-05T17:13:28.137Z] Copying: 276/1024 [MB] (12 MBps) [2024-12-05T17:13:29.082Z] Copying: 297/1024 [MB] (20 MBps) [2024-12-05T17:13:30.027Z] Copying: 309/1024 [MB] (12 MBps) [2024-12-05T17:13:30.969Z] Copying: 329/1024 [MB] (19 MBps) [2024-12-05T17:13:31.916Z] Copying: 341/1024 [MB] (12 MBps) [2024-12-05T17:13:32.859Z] Copying: 353/1024 [MB] (11 MBps) [2024-12-05T17:13:33.804Z] Copying: 365/1024 [MB] (11 MBps) [2024-12-05T17:13:35.213Z] Copying: 375/1024 [MB] (10 MBps) [2024-12-05T17:13:36.155Z] Copying: 386/1024 [MB] (10 MBps) [2024-12-05T17:13:37.095Z] Copying: 398/1024 [MB] (12 MBps) 
[2024-12-05T17:13:38.038Z] Copying: 427/1024 [MB] (28 MBps) [2024-12-05T17:13:38.981Z] Copying: 448/1024 [MB] (21 MBps) [2024-12-05T17:13:39.928Z] Copying: 479/1024 [MB] (30 MBps) [2024-12-05T17:13:40.873Z] Copying: 496/1024 [MB] (16 MBps) [2024-12-05T17:13:41.816Z] Copying: 515/1024 [MB] (18 MBps) [2024-12-05T17:13:43.205Z] Copying: 544/1024 [MB] (29 MBps) [2024-12-05T17:13:44.149Z] Copying: 561/1024 [MB] (17 MBps) [2024-12-05T17:13:45.092Z] Copying: 581/1024 [MB] (19 MBps) [2024-12-05T17:13:46.034Z] Copying: 593/1024 [MB] (11 MBps) [2024-12-05T17:13:46.974Z] Copying: 611/1024 [MB] (17 MBps) [2024-12-05T17:13:47.915Z] Copying: 629/1024 [MB] (17 MBps) [2024-12-05T17:13:48.855Z] Copying: 648/1024 [MB] (19 MBps) [2024-12-05T17:13:50.239Z] Copying: 666/1024 [MB] (18 MBps) [2024-12-05T17:13:50.811Z] Copying: 688/1024 [MB] (21 MBps) [2024-12-05T17:13:52.201Z] Copying: 711/1024 [MB] (22 MBps) [2024-12-05T17:13:53.145Z] Copying: 725/1024 [MB] (14 MBps) [2024-12-05T17:13:54.171Z] Copying: 737/1024 [MB] (12 MBps) [2024-12-05T17:13:55.133Z] Copying: 749/1024 [MB] (11 MBps) [2024-12-05T17:13:56.075Z] Copying: 760/1024 [MB] (11 MBps) [2024-12-05T17:13:57.016Z] Copying: 780/1024 [MB] (19 MBps) [2024-12-05T17:13:57.960Z] Copying: 798/1024 [MB] (18 MBps) [2024-12-05T17:13:58.903Z] Copying: 815/1024 [MB] (17 MBps) [2024-12-05T17:13:59.851Z] Copying: 829/1024 [MB] (13 MBps) [2024-12-05T17:14:01.236Z] Copying: 842/1024 [MB] (13 MBps) [2024-12-05T17:14:01.808Z] Copying: 863/1024 [MB] (20 MBps) [2024-12-05T17:14:03.195Z] Copying: 879/1024 [MB] (15 MBps) [2024-12-05T17:14:04.140Z] Copying: 894/1024 [MB] (15 MBps) [2024-12-05T17:14:05.084Z] Copying: 913/1024 [MB] (19 MBps) [2024-12-05T17:14:06.024Z] Copying: 931/1024 [MB] (17 MBps) [2024-12-05T17:14:06.965Z] Copying: 960/1024 [MB] (28 MBps) [2024-12-05T17:14:07.908Z] Copying: 993/1024 [MB] (33 MBps) [2024-12-05T17:14:07.908Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 17:14:07.754484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 17:14:07.754520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:33.541 [2024-12-05 17:14:07.754531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:33.541 [2024-12-05 17:14:07.754542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.541 [2024-12-05 17:14:07.754558] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:33.541 [2024-12-05 17:14:07.756676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 17:14:07.756803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:33.541 [2024-12-05 17:14:07.756817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.106 ms 00:27:33.541 [2024-12-05 17:14:07.756823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.541 [2024-12-05 17:14:07.758282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 17:14:07.758308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:33.541 [2024-12-05 17:14:07.758316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.441 ms 00:27:33.541 [2024-12-05 17:14:07.758322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.541 [2024-12-05 17:14:07.770303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 
17:14:07.770329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:33.541 [2024-12-05 17:14:07.770337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.965 ms 00:27:33.541 [2024-12-05 17:14:07.770343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.541 [2024-12-05 17:14:07.775132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 17:14:07.775227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:33.541 [2024-12-05 17:14:07.775238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.771 ms 00:27:33.541 [2024-12-05 17:14:07.775244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.541 [2024-12-05 17:14:07.793485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.541 [2024-12-05 17:14:07.793512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:33.542 [2024-12-05 17:14:07.793520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.197 ms 00:27:33.542 [2024-12-05 17:14:07.793526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.805173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.805280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:33.542 [2024-12-05 17:14:07.805292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.622 ms 00:27:33.542 [2024-12-05 17:14:07.805298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.806416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.806439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:33.542 [2024-12-05 17:14:07.806446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:27:33.542 [2024-12-05 17:14:07.806451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.824293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.824390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:33.542 [2024-12-05 17:14:07.824401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.831 ms 00:27:33.542 [2024-12-05 17:14:07.824414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.841947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.841976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:33.542 [2024-12-05 17:14:07.841983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.511 ms 00:27:33.542 [2024-12-05 17:14:07.841988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.858869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.858892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:33.542 [2024-12-05 17:14:07.858900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.857 ms 00:27:33.542 [2024-12-05 17:14:07.858905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.875953] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.542 [2024-12-05 17:14:07.875975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:33.542 [2024-12-05 17:14:07.875983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.980 ms 00:27:33.542 [2024-12-05 17:14:07.875988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.542 [2024-12-05 17:14:07.876012] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:33.542 [2024-12-05 17:14:07.876022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 768 / 261120 wr_cnt: 1 state: open 00:27:33.542 [2024-12-05 17:14:07.876029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876142] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 
[2024-12-05 17:14:07.876394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:33.542 [2024-12-05 17:14:07.876507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 
state: free 00:27:33.543 [2024-12-05 17:14:07.876535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 
0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:33.543 [2024-12-05 17:14:07.876714] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:33.543 [2024-12-05 17:14:07.876720] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6e51171-fa49-40ed-b714-ebb2439e8ed1 00:27:33.543 [2024-12-05 17:14:07.876732] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 768 00:27:33.543 [2024-12-05 17:14:07.876738] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1728 00:27:33.543 [2024-12-05 17:14:07.876743] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 768 00:27:33.543 [2024-12-05 17:14:07.876749] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.2500 00:27:33.543 [2024-12-05 17:14:07.876754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:33.543 [2024-12-05 17:14:07.876762] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:33.543 [2024-12-05 17:14:07.876767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:33.543 [2024-12-05 17:14:07.876772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:33.543 [2024-12-05 17:14:07.876777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:33.543 [2024-12-05 17:14:07.876782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.543 [2024-12-05 17:14:07.876788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:33.543 [2024-12-05 17:14:07.876793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.771 ms 00:27:33.543 [2024-12-05 17:14:07.876799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.543 [2024-12-05 17:14:07.886102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.543 [2024-12-05 17:14:07.886192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:33.543 [2024-12-05 17:14:07.886204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.292 ms 00:27:33.543 [2024-12-05 17:14:07.886213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.543 [2024-12-05 17:14:07.886475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.543 [2024-12-05 17:14:07.886481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:33.543 [2024-12-05 17:14:07.886487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:27:33.543 [2024-12-05 17:14:07.886493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.804 [2024-12-05 17:14:07.911915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.804 [2024-12-05 17:14:07.911942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:33.804 [2024-12-05 17:14:07.911960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:27:33.804 [2024-12-05 17:14:07.911967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.804 [2024-12-05 17:14:07.912007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.804 [2024-12-05 17:14:07.912013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:33.804 [2024-12-05 17:14:07.912019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.804 [2024-12-05 17:14:07.912024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.804 [2024-12-05 17:14:07.912065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.804 [2024-12-05 17:14:07.912072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:33.804 [2024-12-05 17:14:07.912080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.804 [2024-12-05 17:14:07.912085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.804 [2024-12-05 17:14:07.912096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.804 [2024-12-05 17:14:07.912102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:33.804 [2024-12-05 17:14:07.912108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:07.912114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:07.970497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:07.970530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:33.805 [2024-12-05 17:14:07.970541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:07.970547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.018740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.018771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.805 [2024-12-05 17:14:08.018779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.018786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.018836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.018844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.805 [2024-12-05 17:14:08.018850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.018856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.018887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.018893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.805 [2024-12-05 17:14:08.018899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.018905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.018984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.018993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.805 
[2024-12-05 17:14:08.018999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.019006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.019029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.019037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:33.805 [2024-12-05 17:14:08.019043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.019049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.019076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.019082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.805 [2024-12-05 17:14:08.019088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.019093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.019126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:33.805 [2024-12-05 17:14:08.019133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.805 [2024-12-05 17:14:08.019139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:33.805 [2024-12-05 17:14:08.019145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.805 [2024-12-05 17:14:08.019233] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.726 ms, result 0 00:27:34.747 00:27:34.747 00:27:34.747 17:14:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:36.665 17:14:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:36.665 [2024-12-05 17:14:10.981494] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
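Note: the spdk_dd invocation at dirty_shutdown.sh@93 above is the read-back half of the test: an md5 of each test file is recorded before the device is shut down uncleanly, and after restart the data is read back out of the ftl0 bdev and compared against those checksums. A minimal sketch of that read/verify step, using only the flags visible in this log (the SPDK_DIR variable is illustrative; the real test script differs):

    #!/usr/bin/env bash
    # Sketch: read test data back from the FTL bdev and verify it survived the
    # dirty shutdown. Flags as seen in the log: --ib = input bdev, --of = output
    # file, --count = blocks to copy, --json = bdev configuration to load.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed location, matching the paths above
    "$SPDK_DIR/build/bin/spdk_dd" --ib=ftl0 --of="$SPDK_DIR/test/ftl/testfile" \
        --count=262144 --json="$SPDK_DIR/test/ftl/config/ftl.json"
    # Non-zero exit if the recovered data does not match the checksum recorded
    # before the unclean shutdown:
    md5sum -c "$SPDK_DIR/test/ftl/testfile.md5"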
00:27:36.665 [2024-12-05 17:14:10.981943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81325 ] 00:27:36.925 [2024-12-05 17:14:11.138635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.925 [2024-12-05 17:14:11.218880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:37.186 [2024-12-05 17:14:11.429357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:37.186 [2024-12-05 17:14:11.429403] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:37.449 [2024-12-05 17:14:11.576662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.576799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:37.449 [2024-12-05 17:14:11.576814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:37.449 [2024-12-05 17:14:11.576821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.576864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.576874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:37.449 [2024-12-05 17:14:11.576880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:37.449 [2024-12-05 17:14:11.576886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.576900] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:37.449 [2024-12-05 17:14:11.577450] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:37.449 [2024-12-05 17:14:11.577463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.577469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:37.449 [2024-12-05 17:14:11.577475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:27:37.449 [2024-12-05 17:14:11.577481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.578432] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:37.449 [2024-12-05 17:14:11.587978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.588004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:37.449 [2024-12-05 17:14:11.588012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.548 ms 00:27:37.449 [2024-12-05 17:14:11.588018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.588061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.588069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:37.449 [2024-12-05 17:14:11.588076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:37.449 [2024-12-05 17:14:11.588081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.592337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:37.449 [2024-12-05 17:14:11.592361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:37.449 [2024-12-05 17:14:11.592369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:27:37.449 [2024-12-05 17:14:11.592378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.592429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.592436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:37.449 [2024-12-05 17:14:11.592442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:37.449 [2024-12-05 17:14:11.592448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.592488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.592496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:37.449 [2024-12-05 17:14:11.592502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:37.449 [2024-12-05 17:14:11.592507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.592523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:37.449 [2024-12-05 17:14:11.595203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.595319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:37.449 [2024-12-05 17:14:11.595335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.684 ms 00:27:37.449 [2024-12-05 17:14:11.595341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.595369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.449 [2024-12-05 17:14:11.595376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:37.449 [2024-12-05 17:14:11.595383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:37.449 [2024-12-05 17:14:11.595388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.449 [2024-12-05 17:14:11.595402] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:37.450 [2024-12-05 17:14:11.595417] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:37.450 [2024-12-05 17:14:11.595444] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:37.450 [2024-12-05 17:14:11.595458] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:37.450 [2024-12-05 17:14:11.595540] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:37.450 [2024-12-05 17:14:11.595547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:37.450 [2024-12-05 17:14:11.595555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:37.450 [2024-12-05 17:14:11.595563] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595569] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595575] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:37.450 [2024-12-05 17:14:11.595581] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:37.450 [2024-12-05 17:14:11.595588] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:37.450 [2024-12-05 17:14:11.595594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:37.450 [2024-12-05 17:14:11.595600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.450 [2024-12-05 17:14:11.595605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:37.450 [2024-12-05 17:14:11.595611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:27:37.450 [2024-12-05 17:14:11.595617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.450 [2024-12-05 17:14:11.595681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.450 [2024-12-05 17:14:11.595687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:37.450 [2024-12-05 17:14:11.595693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:27:37.450 [2024-12-05 17:14:11.595698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.450 [2024-12-05 17:14:11.595777] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:37.450 [2024-12-05 17:14:11.595784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:37.450 [2024-12-05 17:14:11.595791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:37.450 [2024-12-05 17:14:11.595807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:37.450 [2024-12-05 17:14:11.595826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:37.450 [2024-12-05 17:14:11.595837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:37.450 [2024-12-05 17:14:11.595842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:37.450 [2024-12-05 17:14:11.595848] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:37.450 [2024-12-05 17:14:11.595857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:37.450 [2024-12-05 17:14:11.595862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:37.450 [2024-12-05 17:14:11.595867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:37.450 [2024-12-05 17:14:11.595877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595883] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:37.450 [2024-12-05 17:14:11.595893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:37.450 [2024-12-05 17:14:11.595908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:37.450 [2024-12-05 17:14:11.595923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:37.450 [2024-12-05 17:14:11.595938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:37.450 [2024-12-05 17:14:11.595962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:37.450 [2024-12-05 17:14:11.595968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:37.450 [2024-12-05 17:14:11.595974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:37.450 [2024-12-05 17:14:11.595979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:37.450 [2024-12-05 17:14:11.595984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:37.450 [2024-12-05 17:14:11.595989] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:37.450 [2024-12-05 17:14:11.595994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:37.450 [2024-12-05 17:14:11.595999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:37.450 [2024-12-05 17:14:11.596004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.596011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:37.450 [2024-12-05 17:14:11.596016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:37.450 [2024-12-05 17:14:11.596021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.596027] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:37.450 [2024-12-05 17:14:11.596033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:37.450 [2024-12-05 17:14:11.596039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:37.450 [2024-12-05 17:14:11.596044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:37.450 [2024-12-05 17:14:11.596050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:37.450 [2024-12-05 17:14:11.596055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:37.450 [2024-12-05 17:14:11.596060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:37.450 
[2024-12-05 17:14:11.596065] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:37.450 [2024-12-05 17:14:11.596071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:37.450 [2024-12-05 17:14:11.596076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:37.450 [2024-12-05 17:14:11.596082] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:37.450 [2024-12-05 17:14:11.596089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:37.450 [2024-12-05 17:14:11.596103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:37.450 [2024-12-05 17:14:11.596108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:37.450 [2024-12-05 17:14:11.596114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:37.450 [2024-12-05 17:14:11.596119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:37.450 [2024-12-05 17:14:11.596125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:37.450 [2024-12-05 17:14:11.596130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:37.450 [2024-12-05 17:14:11.596135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:37.450 [2024-12-05 17:14:11.596140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:37.450 [2024-12-05 17:14:11.596146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596167] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:37.450 [2024-12-05 17:14:11.596172] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:37.450 [2024-12-05 17:14:11.596178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:37.450 [2024-12-05 17:14:11.596192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:37.450 [2024-12-05 17:14:11.596199] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:37.450 [2024-12-05 17:14:11.596205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:37.450 [2024-12-05 17:14:11.596210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.450 [2024-12-05 17:14:11.596216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:37.450 [2024-12-05 17:14:11.596222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:27:37.450 [2024-12-05 17:14:11.596227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.617373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.617400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:37.451 [2024-12-05 17:14:11.617408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.112 ms 00:27:37.451 [2024-12-05 17:14:11.617416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.617481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.617487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:37.451 [2024-12-05 17:14:11.617494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:37.451 [2024-12-05 17:14:11.617499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.659841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.659872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:37.451 [2024-12-05 17:14:11.659881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.301 ms 00:27:37.451 [2024-12-05 17:14:11.659887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.659918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.659925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:37.451 [2024-12-05 17:14:11.659934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:37.451 [2024-12-05 17:14:11.659940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.660251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.660271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:37.451 [2024-12-05 17:14:11.660279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:37.451 [2024-12-05 17:14:11.660285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.660389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.660404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:37.451 [2024-12-05 17:14:11.660411] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:27:37.451 [2024-12-05 17:14:11.660421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.670798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.670908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:37.451 [2024-12-05 17:14:11.670924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.362 ms 00:27:37.451 [2024-12-05 17:14:11.670930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.680726] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:27:37.451 [2024-12-05 17:14:11.680823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:37.451 [2024-12-05 17:14:11.680872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.680888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:37.451 [2024-12-05 17:14:11.680904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.838 ms 00:27:37.451 [2024-12-05 17:14:11.680918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.699472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.699563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:37.451 [2024-12-05 17:14:11.699603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:27:37.451 [2024-12-05 17:14:11.699620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.708573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.708656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:37.451 [2024-12-05 17:14:11.708720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.920 ms 00:27:37.451 [2024-12-05 17:14:11.708740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.717425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.717509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:37.451 [2024-12-05 17:14:11.717547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.653 ms 00:27:37.451 [2024-12-05 17:14:11.717563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.718026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.718095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:37.451 [2024-12-05 17:14:11.718136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:27:37.451 [2024-12-05 17:14:11.718152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.762781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.762890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:37.451 [2024-12-05 17:14:11.762936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.605 ms 00:27:37.451 [2024-12-05 17:14:11.762966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.770674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:37.451 [2024-12-05 17:14:11.772610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.772688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:37.451 [2024-12-05 17:14:11.772907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.605 ms 00:27:37.451 [2024-12-05 17:14:11.772927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.773007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.773078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:37.451 [2024-12-05 17:14:11.773128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:37.451 [2024-12-05 17:14:11.773143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.773621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.773701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:37.451 [2024-12-05 17:14:11.773738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:27:37.451 [2024-12-05 17:14:11.773755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.773796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.773840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:37.451 [2024-12-05 17:14:11.773858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:37.451 [2024-12-05 17:14:11.773873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.773931] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:37.451 [2024-12-05 17:14:11.773967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.773983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:37.451 [2024-12-05 17:14:11.774024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:37.451 [2024-12-05 17:14:11.774041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.791816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.791903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:37.451 [2024-12-05 17:14:11.791946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.748 ms 00:27:37.451 [2024-12-05 17:14:11.791971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 [2024-12-05 17:14:11.792028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:37.451 [2024-12-05 17:14:11.792046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:37.451 [2024-12-05 17:14:11.792061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:37.451 [2024-12-05 17:14:11.792076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:37.451 
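Note: every management step in the startup sequence above is traced from mngt/ftl_mngt.c as an Action/name/duration/status quadruple, so the raw log doubles as a coarse profile (here "Restore P2L checkpoints" at 44.605 ms and "Initialize NV cache" at 42.301 ms dominate). A rough, illustrative way to rank the slowest steps from a saved copy of the log (file name hypothetical):

    # Extract the per-step durations printed by trace_step and list the largest.
    grep -oE 'duration: [0-9]+\.[0-9]+ ms' nvme-vg-autotest.log | sort -k2 -rn | head -5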
[2024-12-05 17:14:11.792964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 215.950 ms, result 0 00:27:38.837  [2024-12-05T17:14:14.147Z] Copying: 1328/1048576 [kB] (1328 kBps) [... intermediate spdk_dd progress updates condensed ...] [2024-12-05T17:14:55.337Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-05 17:14:55.247998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.248065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:20.970 [2024-12-05 17:14:55.248082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:20.970 [2024-12-05 17:14:55.248093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.248119] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:20.970 [2024-12-05 17:14:55.253070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05
17:14:55.253122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:20.970 [2024-12-05 17:14:55.253141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.930 ms 00:28:20.970 [2024-12-05 17:14:55.253156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.253570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.253603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:20.970 [2024-12-05 17:14:55.253619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.377 ms 00:28:20.970 [2024-12-05 17:14:55.253633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.271573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.271615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:20.970 [2024-12-05 17:14:55.271627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.914 ms 00:28:20.970 [2024-12-05 17:14:55.271636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.277833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.277866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:20.970 [2024-12-05 17:14:55.277884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:28:20.970 [2024-12-05 17:14:55.277892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.303430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.303470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:20.970 [2024-12-05 17:14:55.303482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.496 ms 00:28:20.970 [2024-12-05 17:14:55.303490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.319120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.319160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:20.970 [2024-12-05 17:14:55.319172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.592 ms 00:28:20.970 [2024-12-05 17:14:55.319180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:20.970 [2024-12-05 17:14:55.321994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:20.970 [2024-12-05 17:14:55.322033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:20.970 [2024-12-05 17:14:55.322045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.770 ms 00:28:20.970 [2024-12-05 17:14:55.322058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.232 [2024-12-05 17:14:55.347848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.232 [2024-12-05 17:14:55.347903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.232 [2024-12-05 17:14:55.347915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.774 ms 00:28:21.232 [2024-12-05 17:14:55.347922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.232 [2024-12-05 17:14:55.372955] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.232 [2024-12-05 17:14:55.373000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:21.232 [2024-12-05 17:14:55.373012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.966 ms 00:28:21.232 [2024-12-05 17:14:55.373020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.232 [2024-12-05 17:14:55.397138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.232 [2024-12-05 17:14:55.397182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.232 [2024-12-05 17:14:55.397193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:28:21.232 [2024-12-05 17:14:55.397201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.232 [2024-12-05 17:14:55.421458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.232 [2024-12-05 17:14:55.421501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.232 [2024-12-05 17:14:55.421513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.187 ms 00:28:21.232 [2024-12-05 17:14:55.421520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.232 [2024-12-05 17:14:55.421562] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.232 [2024-12-05 17:14:55.421578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.232 [2024-12-05 17:14:55.421589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:21.232 [2024-12-05 17:14:55.421598 .. 17:14:55.422376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free (98 identical per-band entries condensed) 00:28:21.233 [2024-12-05 17:14:55.422392] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.233 [2024-12-05 17:14:55.422401] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6e51171-fa49-40ed-b714-ebb2439e8ed1 00:28:21.233 [2024-12-05 17:14:55.422409] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:21.233 [2024-12-05 17:14:55.422417] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 263872 00:28:21.233 [2024-12-05 17:14:55.422430] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 261888 00:28:21.233 [2024-12-05 17:14:55.422439] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0076 00:28:21.233 [2024-12-05 17:14:55.422446] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.233 [2024-12-05 17:14:55.422461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.233 [2024-12-05 17:14:55.422468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.233 [2024-12-05 17:14:55.422475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.233 [2024-12-05 17:14:55.422483] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.233 [2024-12-05 17:14:55.422491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.233 [2024-12-05 17:14:55.422498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.233 [2024-12-05 17:14:55.422507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:28:21.233 [2024-12-05 17:14:55.422514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.435707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.233 [2024-12-05 17:14:55.435749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:21.233 [2024-12-05 17:14:55.435760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 13.174 ms 00:28:21.233 [2024-12-05 17:14:55.435769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.436200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.233 [2024-12-05 17:14:55.436216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.233 [2024-12-05 17:14:55.436225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.396 ms 00:28:21.233 [2024-12-05 17:14:55.436233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.472519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.233 [2024-12-05 17:14:55.472567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:21.233 [2024-12-05 17:14:55.472579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.233 [2024-12-05 17:14:55.472588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.472646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.233 [2024-12-05 17:14:55.472655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:21.233 [2024-12-05 17:14:55.472663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.233 [2024-12-05 17:14:55.472672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.472778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.233 [2024-12-05 17:14:55.472791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:21.233 [2024-12-05 17:14:55.472799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.233 [2024-12-05 17:14:55.472807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.472823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.233 [2024-12-05 17:14:55.472831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:21.233 [2024-12-05 17:14:55.472839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.233 [2024-12-05 17:14:55.472846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.233 [2024-12-05 17:14:55.557943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.233 [2024-12-05 17:14:55.558035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:21.233 [2024-12-05 17:14:55.558049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.233 [2024-12-05 17:14:55.558057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:21.495 [2024-12-05 17:14:55.627457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 
00:28:21.495 [2024-12-05 17:14:55.627553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:21.495 [2024-12-05 17:14:55.627640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:21.495 [2024-12-05 17:14:55.627780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:21.495 [2024-12-05 17:14:55.627843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.627891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.627901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:21.495 [2024-12-05 17:14:55.627912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.627920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.628005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:21.495 [2024-12-05 17:14:55.628017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:21.495 [2024-12-05 17:14:55.628025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:21.495 [2024-12-05 17:14:55.628034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.495 [2024-12-05 17:14:55.628168] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 380.168 ms, result 0 00:28:22.067 00:28:22.067 00:28:22.067 17:14:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:24.031 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:24.031 17:14:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:24.031 [2024-12-05 17:14:58.326332] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
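Note on the spdk_dd read-back above: --count and --skip are in logical blocks. Assuming the FTL bdev's 4 KiB block size (not stated in this log, but consistent with the 1024 MB copy total reported further down), --count=262144 --skip=262144 reads the second 1 GiB of ftl0 into testfile2. A quick arithmetic sketch:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t block_size = 4096;              /* assumed FTL block size */
        const uint64_t count = 262144, skip = 262144;  /* from the spdk_dd command line */

        /* 262144 blocks * 4 KiB = 1 GiB, matching the "x/1024 [MB]" progress below */
        printf("copy length: %" PRIu64 " MiB\n", (count * block_size) >> 20);
        printf("copy offset: %" PRIu64 " MiB\n", (skip  * block_size) >> 20);
        return 0;
    }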
00:28:24.031 [2024-12-05 17:14:58.326423] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81800 ] 00:28:24.293 [2024-12-05 17:14:58.474026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.293 [2024-12-05 17:14:58.583024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.555 [2024-12-05 17:14:58.878758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.555 [2024-12-05 17:14:58.878846] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.819 [2024-12-05 17:14:59.040791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.040856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:24.819 [2024-12-05 17:14:59.040871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:24.819 [2024-12-05 17:14:59.040881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.040934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.040946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.819 [2024-12-05 17:14:59.040979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:28:24.819 [2024-12-05 17:14:59.040987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.041009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:24.819 [2024-12-05 17:14:59.041859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:24.819 [2024-12-05 17:14:59.041900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.041909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.819 [2024-12-05 17:14:59.041919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:28:24.819 [2024-12-05 17:14:59.041927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.043688] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:24.819 [2024-12-05 17:14:59.058060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.058110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:24.819 [2024-12-05 17:14:59.058123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.373 ms 00:28:24.819 [2024-12-05 17:14:59.058131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.058211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.058222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:24.819 [2024-12-05 17:14:59.058231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:28:24.819 [2024-12-05 17:14:59.058239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.066236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
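Every management step in this log is reported as the same four-record quad emitted by trace_step() in mngt/ftl_mngt.c: Action, name, duration, status. A minimal sketch of that bookkeeping pattern follows; the output format is taken from the records above, but the helper names and signature (now_ns, the trace_step arguments) are invented for illustration and are not SPDK's internal API:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    }

    /* Log one finished management step the way the quads above read. */
    static void trace_step(const char *dev, const char *name, uint64_t start_ns, int status)
    {
        printf("[FTL][%s] Action\n", dev);
        printf("[FTL][%s]     name: %s\n", dev, name);
        printf("[FTL][%s]     duration: %.3f ms\n", dev, (now_ns() - start_ns) / 1e6);
        printf("[FTL][%s]     status: %d\n", dev, status);
    }

    int main(void)
    {
        uint64_t t0 = now_ns();
        /* ... run the step ... */
        trace_step("ftl0", "Check configuration", t0, 0);
        return 0;
    }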
00:28:24.819 [2024-12-05 17:14:59.066277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.819 [2024-12-05 17:14:59.066288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.922 ms 00:28:24.819 [2024-12-05 17:14:59.066301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.066379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.066388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.819 [2024-12-05 17:14:59.066397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:24.819 [2024-12-05 17:14:59.066405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.066448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.066457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:24.819 [2024-12-05 17:14:59.066466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:24.819 [2024-12-05 17:14:59.066473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.066500] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:24.819 [2024-12-05 17:14:59.070533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.070718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.819 [2024-12-05 17:14:59.070744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.040 ms 00:28:24.819 [2024-12-05 17:14:59.070753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.070794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.070804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:24.819 [2024-12-05 17:14:59.070812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:28:24.819 [2024-12-05 17:14:59.070819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.070870] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:24.819 [2024-12-05 17:14:59.070894] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:24.819 [2024-12-05 17:14:59.070930] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:24.819 [2024-12-05 17:14:59.070971] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:24.819 [2024-12-05 17:14:59.071080] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:24.819 [2024-12-05 17:14:59.071092] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:24.819 [2024-12-05 17:14:59.071104] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:24.819 [2024-12-05 17:14:59.071114] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071124] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071132] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:24.819 [2024-12-05 17:14:59.071141] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:24.819 [2024-12-05 17:14:59.071152] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:24.819 [2024-12-05 17:14:59.071160] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:24.819 [2024-12-05 17:14:59.071168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.071176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:24.819 [2024-12-05 17:14:59.071184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:28:24.819 [2024-12-05 17:14:59.071192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.071282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.819 [2024-12-05 17:14:59.071290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:24.819 [2024-12-05 17:14:59.071298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:28:24.819 [2024-12-05 17:14:59.071306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.819 [2024-12-05 17:14:59.071415] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:24.819 [2024-12-05 17:14:59.071426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:24.819 [2024-12-05 17:14:59.071436] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:24.819 [2024-12-05 17:14:59.071459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071466] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:24.819 [2024-12-05 17:14:59.071482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.819 [2024-12-05 17:14:59.071496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:24.819 [2024-12-05 17:14:59.071503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:24.819 [2024-12-05 17:14:59.071510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.819 [2024-12-05 17:14:59.071524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:24.819 [2024-12-05 17:14:59.071534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:24.819 [2024-12-05 17:14:59.071541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:24.819 [2024-12-05 17:14:59.071556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071563] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:24.819 [2024-12-05 17:14:59.071576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:24.819 [2024-12-05 17:14:59.071596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:24.819 [2024-12-05 17:14:59.071616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:24.819 [2024-12-05 17:14:59.071635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.819 [2024-12-05 17:14:59.071648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:24.819 [2024-12-05 17:14:59.071655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:24.819 [2024-12-05 17:14:59.071662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.819 [2024-12-05 17:14:59.071668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:24.819 [2024-12-05 17:14:59.071675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:24.819 [2024-12-05 17:14:59.071681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.819 [2024-12-05 17:14:59.071689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:24.820 [2024-12-05 17:14:59.071696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:24.820 [2024-12-05 17:14:59.071702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.820 [2024-12-05 17:14:59.071709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:24.820 [2024-12-05 17:14:59.071716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:24.820 [2024-12-05 17:14:59.071722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.820 [2024-12-05 17:14:59.071730] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:24.820 [2024-12-05 17:14:59.071738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:24.820 [2024-12-05 17:14:59.071746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.820 [2024-12-05 17:14:59.071755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.820 [2024-12-05 17:14:59.071763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:24.820 [2024-12-05 17:14:59.071770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:24.820 [2024-12-05 17:14:59.071777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:24.820 
[2024-12-05 17:14:59.071784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:24.820 [2024-12-05 17:14:59.071791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:24.820 [2024-12-05 17:14:59.071797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:24.820 [2024-12-05 17:14:59.071806] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:24.820 [2024-12-05 17:14:59.071815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:24.820 [2024-12-05 17:14:59.071835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:24.820 [2024-12-05 17:14:59.071843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:24.820 [2024-12-05 17:14:59.071850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:24.820 [2024-12-05 17:14:59.071857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:24.820 [2024-12-05 17:14:59.071864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:24.820 [2024-12-05 17:14:59.071871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:24.820 [2024-12-05 17:14:59.071879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:24.820 [2024-12-05 17:14:59.071886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:24.820 [2024-12-05 17:14:59.071893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:24.820 [2024-12-05 17:14:59.071929] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:24.820 [2024-12-05 17:14:59.071937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071961] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:24.820 [2024-12-05 17:14:59.071969] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:24.820 [2024-12-05 17:14:59.071976] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:24.820 [2024-12-05 17:14:59.071983] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:24.820 [2024-12-05 17:14:59.071991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.071998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:24.820 [2024-12-05 17:14:59.072006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.647 ms 00:28:24.820 [2024-12-05 17:14:59.072016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.103761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.103814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.820 [2024-12-05 17:14:59.103827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.695 ms 00:28:24.820 [2024-12-05 17:14:59.103840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.103931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.103941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.820 [2024-12-05 17:14:59.103980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:28:24.820 [2024-12-05 17:14:59.103990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.149753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.149806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.820 [2024-12-05 17:14:59.149820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.701 ms 00:28:24.820 [2024-12-05 17:14:59.149828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.149876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.149886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.820 [2024-12-05 17:14:59.149900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:24.820 [2024-12-05 17:14:59.149908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.150530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.150561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.820 [2024-12-05 17:14:59.150571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:28:24.820 [2024-12-05 17:14:59.150580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.150735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.150746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.820 [2024-12-05 17:14:59.150760] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:28:24.820 [2024-12-05 17:14:59.150768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.166307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.166496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.820 [2024-12-05 17:14:59.166515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.519 ms 00:28:24.820 [2024-12-05 17:14:59.166523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.820 [2024-12-05 17:14:59.181001] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:24.820 [2024-12-05 17:14:59.181177] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:24.820 [2024-12-05 17:14:59.181196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.820 [2024-12-05 17:14:59.181205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:24.820 [2024-12-05 17:14:59.181215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:28:24.820 [2024-12-05 17:14:59.181222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.207313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.207379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:25.082 [2024-12-05 17:14:59.207392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.972 ms 00:28:25.082 [2024-12-05 17:14:59.207400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.220458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.220630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:25.082 [2024-12-05 17:14:59.220649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.994 ms 00:28:25.082 [2024-12-05 17:14:59.220657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.233059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.233104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:25.082 [2024-12-05 17:14:59.233115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.351 ms 00:28:25.082 [2024-12-05 17:14:59.233122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.233757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.233780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:25.082 [2024-12-05 17:14:59.233794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:28:25.082 [2024-12-05 17:14:59.233801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.301017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.301097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:25.082 [2024-12-05 17:14:59.301121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.197 ms 00:28:25.082 [2024-12-05 17:14:59.301130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.312529] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:25.082 [2024-12-05 17:14:59.315717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.315888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:25.082 [2024-12-05 17:14:59.315908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.531 ms 00:28:25.082 [2024-12-05 17:14:59.315918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.316023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.316036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:25.082 [2024-12-05 17:14:59.316049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:25.082 [2024-12-05 17:14:59.316057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.316872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.316909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:25.082 [2024-12-05 17:14:59.316922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.776 ms 00:28:25.082 [2024-12-05 17:14:59.316933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.316978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.316990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:25.082 [2024-12-05 17:14:59.317000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:25.082 [2024-12-05 17:14:59.317009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.317054] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:25.082 [2024-12-05 17:14:59.317066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.317076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:25.082 [2024-12-05 17:14:59.317086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:25.082 [2024-12-05 17:14:59.317094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.342727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.342909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:25.082 [2024-12-05 17:14:59.342938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.613 ms 00:28:25.082 [2024-12-05 17:14:59.342970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:25.082 [2024-12-05 17:14:59.343049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:25.082 [2024-12-05 17:14:59.343060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:25.082 [2024-12-05 17:14:59.343071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:28:25.082 [2024-12-05 17:14:59.343079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
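The L2P figures in the startup sequence above are internally consistent: 20971520 entries at an address size of 4 bytes is exactly the 80.00 MiB logged for the l2p region in the NV cache layout, and ftl_l2p_cache.c notes that only a ~9 MiB resident slice of it fits in the 10 MiB cache budget. A sanity-check sketch of the region-size arithmetic:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t l2p_entries = 20971520;  /* "L2P entries" from the layout dump */
        const uint64_t addr_size   = 4;         /* "L2P address size: 4" */

        /* 20971520 * 4 B = 83886080 B = exactly 80 MiB ("Region l2p ... 80.00 MiB") */
        printf("l2p region: %" PRIu64 " MiB\n", (l2p_entries * addr_size) >> 20);
        return 0;
    }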
00:28:25.082 [2024-12-05 17:14:59.344528] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.256 ms, result 0 00:28:26.471  [2024-12-05T17:15:01.784Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-05T17:15:02.727Z] Copying: 38/1024 [MB] (20 MBps) [2024-12-05T17:15:03.672Z] Copying: 60/1024 [MB] (22 MBps) [2024-12-05T17:15:04.616Z] Copying: 83/1024 [MB] (22 MBps) [2024-12-05T17:15:05.572Z] Copying: 103/1024 [MB] (20 MBps) [2024-12-05T17:15:06.959Z] Copying: 130/1024 [MB] (26 MBps) [2024-12-05T17:15:07.532Z] Copying: 151/1024 [MB] (21 MBps) [2024-12-05T17:15:08.921Z] Copying: 171/1024 [MB] (20 MBps) [2024-12-05T17:15:09.863Z] Copying: 186/1024 [MB] (15 MBps) [2024-12-05T17:15:10.806Z] Copying: 206/1024 [MB] (19 MBps) [2024-12-05T17:15:11.750Z] Copying: 227/1024 [MB] (21 MBps) [2024-12-05T17:15:12.695Z] Copying: 246/1024 [MB] (19 MBps) [2024-12-05T17:15:13.639Z] Copying: 263/1024 [MB] (16 MBps) [2024-12-05T17:15:14.581Z] Copying: 285/1024 [MB] (22 MBps) [2024-12-05T17:15:15.524Z] Copying: 302/1024 [MB] (16 MBps) [2024-12-05T17:15:16.910Z] Copying: 323/1024 [MB] (21 MBps) [2024-12-05T17:15:17.852Z] Copying: 342/1024 [MB] (19 MBps) [2024-12-05T17:15:18.793Z] Copying: 366/1024 [MB] (23 MBps) [2024-12-05T17:15:19.738Z] Copying: 385/1024 [MB] (19 MBps) [2024-12-05T17:15:20.681Z] Copying: 398/1024 [MB] (13 MBps) [2024-12-05T17:15:21.624Z] Copying: 415/1024 [MB] (17 MBps) [2024-12-05T17:15:22.569Z] Copying: 437/1024 [MB] (22 MBps) [2024-12-05T17:15:23.958Z] Copying: 448/1024 [MB] (10 MBps) [2024-12-05T17:15:24.531Z] Copying: 460/1024 [MB] (12 MBps) [2024-12-05T17:15:25.920Z] Copying: 474/1024 [MB] (14 MBps) [2024-12-05T17:15:26.865Z] Copying: 485/1024 [MB] (10 MBps) [2024-12-05T17:15:27.811Z] Copying: 502/1024 [MB] (17 MBps) [2024-12-05T17:15:28.782Z] Copying: 519/1024 [MB] (16 MBps) [2024-12-05T17:15:29.827Z] Copying: 531/1024 [MB] (11 MBps) [2024-12-05T17:15:30.771Z] Copying: 544/1024 [MB] (13 MBps) [2024-12-05T17:15:31.715Z] Copying: 556/1024 [MB] (12 MBps) [2024-12-05T17:15:32.660Z] Copying: 570/1024 [MB] (14 MBps) [2024-12-05T17:15:33.605Z] Copying: 581/1024 [MB] (10 MBps) [2024-12-05T17:15:34.549Z] Copying: 595/1024 [MB] (14 MBps) [2024-12-05T17:15:35.936Z] Copying: 606/1024 [MB] (11 MBps) [2024-12-05T17:15:36.878Z] Copying: 622/1024 [MB] (15 MBps) [2024-12-05T17:15:37.822Z] Copying: 639/1024 [MB] (16 MBps) [2024-12-05T17:15:38.793Z] Copying: 653/1024 [MB] (14 MBps) [2024-12-05T17:15:39.734Z] Copying: 666/1024 [MB] (12 MBps) [2024-12-05T17:15:40.691Z] Copying: 678/1024 [MB] (12 MBps) [2024-12-05T17:15:41.633Z] Copying: 693/1024 [MB] (15 MBps) [2024-12-05T17:15:42.628Z] Copying: 705/1024 [MB] (11 MBps) [2024-12-05T17:15:43.568Z] Copying: 718/1024 [MB] (13 MBps) [2024-12-05T17:15:44.955Z] Copying: 732/1024 [MB] (14 MBps) [2024-12-05T17:15:45.528Z] Copying: 749/1024 [MB] (17 MBps) [2024-12-05T17:15:46.913Z] Copying: 765/1024 [MB] (16 MBps) [2024-12-05T17:15:47.857Z] Copying: 783/1024 [MB] (17 MBps) [2024-12-05T17:15:48.803Z] Copying: 802/1024 [MB] (18 MBps) [2024-12-05T17:15:49.748Z] Copying: 821/1024 [MB] (18 MBps) [2024-12-05T17:15:50.695Z] Copying: 838/1024 [MB] (17 MBps) [2024-12-05T17:15:51.639Z] Copying: 849/1024 [MB] (11 MBps) [2024-12-05T17:15:52.581Z] Copying: 868/1024 [MB] (18 MBps) [2024-12-05T17:15:53.550Z] Copying: 880/1024 [MB] (12 MBps) [2024-12-05T17:15:54.936Z] Copying: 893/1024 [MB] (12 MBps) [2024-12-05T17:15:55.879Z] Copying: 908/1024 [MB] (14 MBps) [2024-12-05T17:15:56.825Z] Copying: 919/1024 [MB] (10 MBps) 
[2024-12-05T17:15:57.769Z] Copying: 932/1024 [MB] (12 MBps) [2024-12-05T17:15:58.713Z] Copying: 960/1024 [MB] (28 MBps) [2024-12-05T17:15:59.656Z] Copying: 976/1024 [MB] (15 MBps) [2024-12-05T17:16:00.602Z] Copying: 997/1024 [MB] (20 MBps) [2024-12-05T17:16:01.633Z] Copying: 1008/1024 [MB] (11 MBps) [2024-12-05T17:16:01.633Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-05 17:16:01.301037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.266 [2024-12-05 17:16:01.301092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:27.266 [2024-12-05 17:16:01.301106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:27.266 [2024-12-05 17:16:01.301114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.266 [2024-12-05 17:16:01.301136] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:27.266 [2024-12-05 17:16:01.304249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.266 [2024-12-05 17:16:01.304288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:27.266 [2024-12-05 17:16:01.304299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:29:27.266 [2024-12-05 17:16:01.304307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.266 [2024-12-05 17:16:01.304523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.266 [2024-12-05 17:16:01.304534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:27.266 [2024-12-05 17:16:01.304542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:29:27.266 [2024-12-05 17:16:01.304549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.266 [2024-12-05 17:16:01.308392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.266 [2024-12-05 17:16:01.308488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:27.266 [2024-12-05 17:16:01.308544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.827 ms 00:29:27.266 [2024-12-05 17:16:01.308573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.314793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.314919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:27.267 [2024-12-05 17:16:01.314997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.185 ms 00:29:27.267 [2024-12-05 17:16:01.315021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.340423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.340575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:27.267 [2024-12-05 17:16:01.340639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:29:27.267 [2024-12-05 17:16:01.340662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.356954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.357118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:27.267 [2024-12-05 17:16:01.357137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.226 ms 
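The "(average 16 MBps)" summary closing the copy above checks out roughly against the progress timestamps: about 17:14:59Z to 17:16:01Z for 1024 MB is ~62 s, i.e. ~16.5 MBps before rounding. A throwaway check (the 62 s figure is read off the log, not exact):

    #include <stdio.h>

    int main(void)
    {
        const double megabytes = 1024.0;  /* total copied */
        const double seconds   = 62.0;    /* ~17:14:59Z .. 17:16:01Z */

        printf("average: %.1f MBps\n", megabytes / seconds);  /* ~16.5, logged as 16 */
        return 0;
    }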
00:29:27.267 [2024-12-05 17:16:01.357146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.362179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.362229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:27.267 [2024-12-05 17:16:01.362240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:29:27.267 [2024-12-05 17:16:01.362248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.388667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.388739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:27.267 [2024-12-05 17:16:01.388752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.403 ms 00:29:27.267 [2024-12-05 17:16:01.388758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.415205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.415387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:27.267 [2024-12-05 17:16:01.415408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.396 ms 00:29:27.267 [2024-12-05 17:16:01.415416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.440940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.441002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:27.267 [2024-12-05 17:16:01.441014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:29:27.267 [2024-12-05 17:16:01.441021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.466824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.267 [2024-12-05 17:16:01.467036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:27.267 [2024-12-05 17:16:01.467056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.709 ms 00:29:27.267 [2024-12-05 17:16:01.467063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.267 [2024-12-05 17:16:01.467179] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:27.267 [2024-12-05 17:16:01.467224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:27.267 [2024-12-05 17:16:01.467239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:27.267 [2024-12-05 17:16:01.467248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467287] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 
17:16:01.467478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 
00:29:27.267 [2024-12-05 17:16:01.467663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:27.267 [2024-12-05 17:16:01.467670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 
wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.467996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.468004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:27.268 [2024-12-05 17:16:01.468020] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:27.268 [2024-12-05 17:16:01.468029] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f6e51171-fa49-40ed-b714-ebb2439e8ed1 00:29:27.268 [2024-12-05 17:16:01.468037] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:27.268 [2024-12-05 17:16:01.468045] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:27.268 [2024-12-05 17:16:01.468052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:27.268 [2024-12-05 17:16:01.468061] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:27.268 [2024-12-05 17:16:01.468077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:27.268 [2024-12-05 17:16:01.468084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:27.268 
[2024-12-05 17:16:01.468093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:27.268 [2024-12-05 17:16:01.468099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:27.268 [2024-12-05 17:16:01.468106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:27.268 [2024-12-05 17:16:01.468114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.268 [2024-12-05 17:16:01.468122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:27.268 [2024-12-05 17:16:01.468131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.939 ms 00:29:27.268 [2024-12-05 17:16:01.468142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.481924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.268 [2024-12-05 17:16:01.481988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:27.268 [2024-12-05 17:16:01.482001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.747 ms 00:29:27.268 [2024-12-05 17:16:01.482008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.482414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.268 [2024-12-05 17:16:01.482433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:27.268 [2024-12-05 17:16:01.482443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:29:27.268 [2024-12-05 17:16:01.482450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.519244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.268 [2024-12-05 17:16:01.519297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.268 [2024-12-05 17:16:01.519311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.268 [2024-12-05 17:16:01.519320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.519381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.268 [2024-12-05 17:16:01.519396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.268 [2024-12-05 17:16:01.519405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.268 [2024-12-05 17:16:01.519415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.519488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.268 [2024-12-05 17:16:01.519500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.268 [2024-12-05 17:16:01.519509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.268 [2024-12-05 17:16:01.519518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.519535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.268 [2024-12-05 17:16:01.519545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.268 [2024-12-05 17:16:01.519558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.268 [2024-12-05 17:16:01.519567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.268 [2024-12-05 17:16:01.605824] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.268 [2024-12-05 17:16:01.606104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.268 [2024-12-05 17:16:01.606128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.268 [2024-12-05 17:16:01.606137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.530 [2024-12-05 17:16:01.676207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.530 [2024-12-05 17:16:01.676321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.530 [2024-12-05 17:16:01.676389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.530 [2024-12-05 17:16:01.676521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:27.530 [2024-12-05 17:16:01.676581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.530 [2024-12-05 17:16:01.676654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.530 [2024-12-05 17:16:01.676725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:27.530 [2024-12-05 17:16:01.676736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.530 [2024-12-05 17:16:01.676744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:27.530 [2024-12-05 17:16:01.676756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:29:27.530 [2024-12-05 17:16:01.676892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 375.814 ms, result 0 00:29:28.102 00:29:28.102 00:29:28.102 17:16:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:30.651 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:30.651 Process with pid 79983 is not found 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79983 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79983 ']' 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79983 00:29:30.651 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79983) - No such process 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79983 is not found' 00:29:30.651 17:16:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:30.651 Remove shared memory files 00:29:30.651 17:16:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:30.651 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:30.651 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:30.651 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:30.651 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:30.912 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:30.912 17:16:05 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:30.912 ************************************ 00:29:30.912 END TEST ftl_dirty_shutdown 00:29:30.912 ************************************ 00:29:30.912 00:29:30.912 real 3m58.097s 00:29:30.912 user 4m17.528s 00:29:30.912 sys 0m25.973s 00:29:30.912 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.912 17:16:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:30.912 17:16:05 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:30.912 17:16:05 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:30.912 17:16:05 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.912 17:16:05 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:30.912 ************************************ 00:29:30.912 START TEST ftl_upgrade_shutdown 00:29:30.912 
************************************ 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:30.912 * Looking for test storage... 00:29:30.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:30.912 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:30.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.913 --rc genhtml_branch_coverage=1 00:29:30.913 --rc genhtml_function_coverage=1 00:29:30.913 --rc genhtml_legend=1 00:29:30.913 --rc geninfo_all_blocks=1 00:29:30.913 --rc geninfo_unexecuted_blocks=1 00:29:30.913 00:29:30.913 ' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:30.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.913 --rc genhtml_branch_coverage=1 00:29:30.913 --rc genhtml_function_coverage=1 00:29:30.913 --rc genhtml_legend=1 00:29:30.913 --rc geninfo_all_blocks=1 00:29:30.913 --rc geninfo_unexecuted_blocks=1 00:29:30.913 00:29:30.913 ' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:30.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.913 --rc genhtml_branch_coverage=1 00:29:30.913 --rc genhtml_function_coverage=1 00:29:30.913 --rc genhtml_legend=1 00:29:30.913 --rc geninfo_all_blocks=1 00:29:30.913 --rc geninfo_unexecuted_blocks=1 00:29:30.913 00:29:30.913 ' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:30.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:30.913 --rc genhtml_branch_coverage=1 00:29:30.913 --rc genhtml_function_coverage=1 00:29:30.913 --rc genhtml_legend=1 00:29:30.913 --rc geninfo_all_blocks=1 00:29:30.913 --rc geninfo_unexecuted_blocks=1 00:29:30.913 00:29:30.913 ' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:30.913 17:16:05 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82548 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82548 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82548 ']' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.913 17:16:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:31.173 [2024-12-05 17:16:05.360558] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
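The xtrace above shows ftl/common.sh bringing up the SPDK target process pinned to core 0 and then parking in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. A minimal stand-alone sketch of that bring-up pattern in bash (the polling loop is an illustrative stand-in for autotest_common.sh's waitforlisten, not its literal implementation):

#!/usr/bin/env bash
# Launch the SPDK target on core 0, as tcp_target_setup does above.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' &
spdk_tgt_pid=$!

# Poll the default RPC socket until the target accepts commands.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "spdk_tgt (pid $spdk_tgt_pid) is listening on /var/tmp/spdk.sock"
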
00:29:31.173 [2024-12-05 17:16:05.361132] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82548 ] 00:29:31.173 [2024-12-05 17:16:05.527372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.434 [2024-12-05 17:16:05.648582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:32.007 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:32.313 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:32.573 { 00:29:32.573 "name": "basen1", 00:29:32.573 "aliases": [ 00:29:32.573 "83b97cc5-7f63-4ee1-921d-5f54f9a5d097" 00:29:32.573 ], 00:29:32.573 "product_name": "NVMe disk", 00:29:32.573 "block_size": 4096, 00:29:32.573 "num_blocks": 1310720, 00:29:32.573 "uuid": "83b97cc5-7f63-4ee1-921d-5f54f9a5d097", 00:29:32.573 "numa_id": -1, 00:29:32.573 "assigned_rate_limits": { 00:29:32.573 "rw_ios_per_sec": 0, 00:29:32.573 "rw_mbytes_per_sec": 0, 00:29:32.573 "r_mbytes_per_sec": 0, 00:29:32.573 "w_mbytes_per_sec": 0 00:29:32.573 }, 00:29:32.573 "claimed": true, 00:29:32.573 "claim_type": "read_many_write_one", 00:29:32.573 "zoned": false, 00:29:32.573 "supported_io_types": { 00:29:32.573 "read": true, 00:29:32.573 "write": true, 00:29:32.573 "unmap": true, 00:29:32.573 "flush": true, 00:29:32.573 "reset": true, 00:29:32.573 "nvme_admin": true, 00:29:32.573 "nvme_io": true, 00:29:32.573 "nvme_io_md": false, 00:29:32.573 "write_zeroes": true, 00:29:32.573 "zcopy": false, 00:29:32.573 "get_zone_info": false, 00:29:32.573 "zone_management": false, 00:29:32.573 "zone_append": false, 00:29:32.573 "compare": true, 00:29:32.573 "compare_and_write": false, 00:29:32.573 "abort": true, 00:29:32.573 "seek_hole": false, 00:29:32.573 "seek_data": false, 00:29:32.573 "copy": true, 00:29:32.573 "nvme_iov_md": false 00:29:32.573 }, 00:29:32.573 "driver_specific": { 00:29:32.573 "nvme": [ 00:29:32.573 { 00:29:32.573 "pci_address": "0000:00:11.0", 00:29:32.573 "trid": { 00:29:32.573 "trtype": "PCIe", 00:29:32.573 "traddr": "0000:00:11.0" 00:29:32.573 }, 00:29:32.573 "ctrlr_data": { 00:29:32.573 "cntlid": 0, 00:29:32.573 "vendor_id": "0x1b36", 00:29:32.573 "model_number": "QEMU NVMe Ctrl", 00:29:32.573 "serial_number": "12341", 00:29:32.573 "firmware_revision": "8.0.0", 00:29:32.573 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:32.573 "oacs": { 00:29:32.573 "security": 0, 00:29:32.573 "format": 1, 00:29:32.573 "firmware": 0, 00:29:32.573 "ns_manage": 1 00:29:32.573 }, 00:29:32.573 "multi_ctrlr": false, 00:29:32.573 "ana_reporting": false 00:29:32.573 }, 00:29:32.573 "vs": { 00:29:32.573 "nvme_version": "1.4" 00:29:32.573 }, 00:29:32.573 "ns_data": { 00:29:32.573 "id": 1, 00:29:32.573 "can_share": false 00:29:32.573 } 00:29:32.573 } 00:29:32.573 ], 00:29:32.573 "mp_policy": "active_passive" 00:29:32.573 } 00:29:32.573 } 00:29:32.573 ]' 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:32.573 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:32.574 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:32.574 17:16:06 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:32.574 17:16:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:32.834 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=ab05872b-b525-4194-a772-7457ea4b2232 00:29:32.834 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:32.834 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab05872b-b525-4194-a772-7457ea4b2232 00:29:33.095 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:33.356 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d94f0693-dc4f-4e92-b5ee-feabb40b6ef8 00:29:33.356 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d94f0693-dc4f-4e92-b5ee-feabb40b6ef8 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=fd4297a5-7ffa-4cc4-9952-67eb488c76b1 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z fd4297a5-7ffa-4cc4-9952-67eb488c76b1 ]] 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 fd4297a5-7ffa-4cc4-9952-67eb488c76b1 5120 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=fd4297a5-7ffa-4cc4-9952-67eb488c76b1 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size fd4297a5-7ffa-4cc4-9952-67eb488c76b1 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=fd4297a5-7ffa-4cc4-9952-67eb488c76b1 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:33.616 17:16:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b fd4297a5-7ffa-4cc4-9952-67eb488c76b1 00:29:33.876 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:33.876 { 00:29:33.876 "name": "fd4297a5-7ffa-4cc4-9952-67eb488c76b1", 00:29:33.876 "aliases": [ 00:29:33.876 "lvs/basen1p0" 00:29:33.876 ], 00:29:33.876 "product_name": "Logical Volume", 00:29:33.876 "block_size": 4096, 00:29:33.876 "num_blocks": 5242880, 00:29:33.876 "uuid": "fd4297a5-7ffa-4cc4-9952-67eb488c76b1", 00:29:33.876 "assigned_rate_limits": { 00:29:33.876 "rw_ios_per_sec": 0, 00:29:33.876 "rw_mbytes_per_sec": 0, 00:29:33.876 "r_mbytes_per_sec": 0, 00:29:33.876 "w_mbytes_per_sec": 0 00:29:33.876 }, 00:29:33.876 "claimed": false, 00:29:33.876 "zoned": false, 00:29:33.876 "supported_io_types": { 00:29:33.876 "read": true, 00:29:33.876 "write": true, 00:29:33.876 "unmap": true, 00:29:33.876 "flush": false, 00:29:33.876 "reset": true, 00:29:33.876 "nvme_admin": false, 00:29:33.876 "nvme_io": false, 00:29:33.876 "nvme_io_md": false, 00:29:33.876 "write_zeroes": 
true, 00:29:33.876 "zcopy": false, 00:29:33.877 "get_zone_info": false, 00:29:33.877 "zone_management": false, 00:29:33.877 "zone_append": false, 00:29:33.877 "compare": false, 00:29:33.877 "compare_and_write": false, 00:29:33.877 "abort": false, 00:29:33.877 "seek_hole": true, 00:29:33.877 "seek_data": true, 00:29:33.877 "copy": false, 00:29:33.877 "nvme_iov_md": false 00:29:33.877 }, 00:29:33.877 "driver_specific": { 00:29:33.877 "lvol": { 00:29:33.877 "lvol_store_uuid": "d94f0693-dc4f-4e92-b5ee-feabb40b6ef8", 00:29:33.877 "base_bdev": "basen1", 00:29:33.877 "thin_provision": true, 00:29:33.877 "num_allocated_clusters": 0, 00:29:33.877 "snapshot": false, 00:29:33.877 "clone": false, 00:29:33.877 "esnap_clone": false 00:29:33.877 } 00:29:33.877 } 00:29:33.877 } 00:29:33.877 ]' 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:33.877 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:34.137 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:34.137 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:34.137 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:34.398 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:34.398 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:34.398 17:16:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d fd4297a5-7ffa-4cc4-9952-67eb488c76b1 -c cachen1p0 --l2p_dram_limit 2 00:29:34.661 [2024-12-05 17:16:08.800434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.800474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:34.661 [2024-12-05 17:16:08.800486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:34.661 [2024-12-05 17:16:08.800493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.800537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.800545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:34.661 [2024-12-05 17:16:08.800553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:29:34.661 [2024-12-05 17:16:08.800559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.800575] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:34.661 [2024-12-05 
17:16:08.801168] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:34.661 [2024-12-05 17:16:08.801185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.801191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:34.661 [2024-12-05 17:16:08.801200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.612 ms 00:29:34.661 [2024-12-05 17:16:08.801206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.801230] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 8a3521ab-2d4e-46df-a40d-788f9cd87bb7 00:29:34.661 [2024-12-05 17:16:08.802205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.802228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:34.661 [2024-12-05 17:16:08.802236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:29:34.661 [2024-12-05 17:16:08.802243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.807001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.807030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:34.661 [2024-12-05 17:16:08.807037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.705 ms 00:29:34.661 [2024-12-05 17:16:08.807044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.807074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.807083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:34.661 [2024-12-05 17:16:08.807089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:29:34.661 [2024-12-05 17:16:08.807097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.807126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.807134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:34.661 [2024-12-05 17:16:08.807142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:34.661 [2024-12-05 17:16:08.807149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.807165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:34.661 [2024-12-05 17:16:08.810089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.810113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:34.661 [2024-12-05 17:16:08.810123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.927 ms 00:29:34.661 [2024-12-05 17:16:08.810130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.810152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.810159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:34.661 [2024-12-05 17:16:08.810166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:34.661 [2024-12-05 17:16:08.810172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.810192] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:34.661 [2024-12-05 17:16:08.810300] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:34.661 [2024-12-05 17:16:08.810312] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:34.661 [2024-12-05 17:16:08.810320] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:34.661 [2024-12-05 17:16:08.810329] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:34.661 [2024-12-05 17:16:08.810336] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:34.661 [2024-12-05 17:16:08.810344] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:34.661 [2024-12-05 17:16:08.810349] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:34.661 [2024-12-05 17:16:08.810358] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:34.661 [2024-12-05 17:16:08.810364] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:34.661 [2024-12-05 17:16:08.810370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.810376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:34.661 [2024-12-05 17:16:08.810383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.180 ms 00:29:34.661 [2024-12-05 17:16:08.810389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.810454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.661 [2024-12-05 17:16:08.810465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:34.661 [2024-12-05 17:16:08.810472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:34.661 [2024-12-05 17:16:08.810477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.661 [2024-12-05 17:16:08.810560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:34.661 [2024-12-05 17:16:08.810567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:34.661 [2024-12-05 17:16:08.810574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.661 [2024-12-05 17:16:08.810581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.661 [2024-12-05 17:16:08.810589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:34.661 [2024-12-05 17:16:08.810594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:34.661 [2024-12-05 17:16:08.810601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:34.661 [2024-12-05 17:16:08.810606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:34.661 [2024-12-05 17:16:08.810612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:34.661 [2024-12-05 17:16:08.810617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.661 [2024-12-05 17:16:08.810624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:34.661 [2024-12-05 17:16:08.810630] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:34.661 [2024-12-05 17:16:08.810636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.661 [2024-12-05 17:16:08.810642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:34.661 [2024-12-05 17:16:08.810649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:34.661 [2024-12-05 17:16:08.810655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:34.662 [2024-12-05 17:16:08.810669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:34.662 [2024-12-05 17:16:08.810675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:34.662 [2024-12-05 17:16:08.810686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:34.662 [2024-12-05 17:16:08.810702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:34.662 [2024-12-05 17:16:08.810720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:34.662 [2024-12-05 17:16:08.810736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:34.662 [2024-12-05 17:16:08.810754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:34.662 [2024-12-05 17:16:08.810770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:34.662 [2024-12-05 17:16:08.810788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:34.662 [2024-12-05 17:16:08.810804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:34.662 [2024-12-05 17:16:08.810810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810814] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:34.662 [2024-12-05 17:16:08.810821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:34.662 [2024-12-05 17:16:08.810826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:34.662 [2024-12-05 17:16:08.810840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:34.662 [2024-12-05 17:16:08.810848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:34.662 [2024-12-05 17:16:08.810852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:34.662 [2024-12-05 17:16:08.810859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:34.662 [2024-12-05 17:16:08.810864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:34.662 [2024-12-05 17:16:08.810871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:34.662 [2024-12-05 17:16:08.810878] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:34.662 [2024-12-05 17:16:08.810887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:34.662 [2024-12-05 17:16:08.810900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810912] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:34.662 [2024-12-05 17:16:08.810917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:34.662 [2024-12-05 17:16:08.810924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:34.662 [2024-12-05 17:16:08.810929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:34.662 [2024-12-05 17:16:08.810937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.810991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:34.662 [2024-12-05 17:16:08.810997] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:34.662 [2024-12-05 17:16:08.811004] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.811010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:34.662 [2024-12-05 17:16:08.811017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:34.662 [2024-12-05 17:16:08.811022] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:34.662 [2024-12-05 17:16:08.811030] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:34.662 [2024-12-05 17:16:08.811035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:34.662 [2024-12-05 17:16:08.811043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:34.662 [2024-12-05 17:16:08.811049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.530 ms 00:29:34.662 [2024-12-05 17:16:08.811055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:34.662 [2024-12-05 17:16:08.811097] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
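For orientation, the whole FTL startup traced above (layout dump, superblock 8a3521ab-2d4e-46df-a40d-788f9cd87bb7, and now the NV cache scrub) was triggered by a short RPC sequence from ftl/common.sh, visible piecemeal in the earlier xtrace, where the 20480 MiB base size is also rechecked by piping bdev_get_bdevs through jq for .block_size and .num_blocks. Condensed into one place as a sketch (the lvstore and lvol UUIDs are the ones this particular run generated; a fresh run produces different ones):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: a thin-provisioned 20480 MiB lvol on the 0000:00:11.0 namespace.
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
$rpc bdev_lvol_create_lvstore basen1 lvs
$rpc bdev_lvol_create basen1p0 20480 -t -u d94f0693-dc4f-4e92-b5ee-feabb40b6ef8

# NV cache: a 5120 MiB split of the 0000:00:10.0 namespace.
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # exposes cachen1
$rpc bdev_split_create cachen1 -s 5120 1                           # exposes cachen1p0

# Glue them into the ftl bdev; the 60 s RPC timeout covers the scrub below.
$rpc -t 60 bdev_ftl_create -b ftl -d fd4297a5-7ffa-4cc4-9952-67eb488c76b1 \
     -c cachen1p0 --l2p_dram_limit 2
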
00:29:34.662 [2024-12-05 17:16:08.811108] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:38.872 [2024-12-05 17:16:13.235321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:38.872 [2024-12-05 17:16:13.235402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:38.872 [2024-12-05 17:16:13.235419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4424.206 ms 00:29:38.872 [2024-12-05 17:16:13.235431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.265310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.265372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:39.134 [2024-12-05 17:16:13.265385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.646 ms 00:29:39.134 [2024-12-05 17:16:13.265396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.265477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.265491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:39.134 [2024-12-05 17:16:13.265500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:39.134 [2024-12-05 17:16:13.265516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.299854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.300103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:39.134 [2024-12-05 17:16:13.300126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.302 ms 00:29:39.134 [2024-12-05 17:16:13.300137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.300176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.300192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:39.134 [2024-12-05 17:16:13.300201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:39.134 [2024-12-05 17:16:13.300211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.300798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.300828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:39.134 [2024-12-05 17:16:13.300846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:29:39.134 [2024-12-05 17:16:13.300857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.300904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.300915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:39.134 [2024-12-05 17:16:13.300928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:39.134 [2024-12-05 17:16:13.300941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.319137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.319344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:39.134 [2024-12-05 17:16:13.319365] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.147 ms 00:29:39.134 [2024-12-05 17:16:13.319376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.347674] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:39.134 [2024-12-05 17:16:13.349346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.349397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:39.134 [2024-12-05 17:16:13.349413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.851 ms 00:29:39.134 [2024-12-05 17:16:13.349423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.383853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.383910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:39.134 [2024-12-05 17:16:13.383929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.377 ms 00:29:39.134 [2024-12-05 17:16:13.383939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.384077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.384093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:39.134 [2024-12-05 17:16:13.384130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:29:39.134 [2024-12-05 17:16:13.384139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.410717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.410765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:39.134 [2024-12-05 17:16:13.410782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.514 ms 00:29:39.134 [2024-12-05 17:16:13.410790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.437141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.437195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:39.134 [2024-12-05 17:16:13.437211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.289 ms 00:29:39.134 [2024-12-05 17:16:13.437219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.134 [2024-12-05 17:16:13.437831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.134 [2024-12-05 17:16:13.437851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:39.134 [2024-12-05 17:16:13.437864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.558 ms 00:29:39.134 [2024-12-05 17:16:13.437874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.536648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-12-05 17:16:13.536710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:39.396 [2024-12-05 17:16:13.536732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 98.725 ms 00:29:39.396 [2024-12-05 17:16:13.536741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.565215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:39.396 [2024-12-05 17:16:13.565263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:39.396 [2024-12-05 17:16:13.565280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.353 ms 00:29:39.396 [2024-12-05 17:16:13.565288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.592487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-12-05 17:16:13.592540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:39.396 [2024-12-05 17:16:13.592557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.137 ms 00:29:39.396 [2024-12-05 17:16:13.592564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.619312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-12-05 17:16:13.619364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:39.396 [2024-12-05 17:16:13.619380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.687 ms 00:29:39.396 [2024-12-05 17:16:13.619389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.619449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-12-05 17:16:13.619459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:39.396 [2024-12-05 17:16:13.619473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:39.396 [2024-12-05 17:16:13.619481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.619587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:39.396 [2024-12-05 17:16:13.619601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:39.396 [2024-12-05 17:16:13.619613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:39.396 [2024-12-05 17:16:13.619621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:39.396 [2024-12-05 17:16:13.621018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4819.848 ms, result 0 00:29:39.396 { 00:29:39.396 "name": "ftl", 00:29:39.396 "uuid": "8a3521ab-2d4e-46df-a40d-788f9cd87bb7" 00:29:39.396 } 00:29:39.396 17:16:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:39.657 [2024-12-05 17:16:13.839910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:39.657 17:16:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:39.917 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:39.917 [2024-12-05 17:16:14.268409] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:40.177 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:40.177 [2024-12-05 17:16:14.481873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:40.177 17:16:14 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:40.748 Fill FTL, iteration 1 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:40.748 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=82681 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 82681 /var/tmp/spdk.tgt.sock 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82681 ']' 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:40.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:40.749 17:16:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.749 [2024-12-05 17:16:14.935751] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
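The xtrace above sets up the core loop of upgrade_shutdown.sh: write two 1 GiB stripes of urandom through the exported ftln1 bdev and record an MD5 digest per stripe, so the data can be re-verified after the shutdown/upgrade cycle. A minimal bash sketch of that loop, reconstructed from the traced lines (variable names follow the script's own; tcp_dd is the test's wrapper around spdk_dd, and $testdir stands in for /home/vagrant/spdk_repo/spdk/test/ftl):

    size=1073741824   # 1 GiB per fill pass (upgrade_shutdown.sh@28)
    seek=0; skip=0    # write/read offsets, counted in bs-sized blocks
    bs=1048576        # 1 MiB
    count=1024
    iterations=2
    qd=2
    sums=()
    for (( i = 0; i < iterations; i++ )); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
    done

The digests land in sums[] (8315964310dbe37cda9f52286ac6676a and 8c3bb7b700d1b9aed43f8bc476c3c883 later in this log) so the same ranges can be read back and compared once the target has been restarted.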
00:29:40.749 [2024-12-05 17:16:14.936492] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82681 ] 00:29:40.749 [2024-12-05 17:16:15.098813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:41.008 [2024-12-05 17:16:15.217708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.579 17:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.579 17:16:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:41.579 17:16:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:41.841 ftln1 00:29:41.841 17:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:41.841 17:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 82681 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82681 ']' 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82681 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82681 00:29:42.103 killing process with pid 82681 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82681' 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82681 00:29:42.103 17:16:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82681 00:29:44.020 17:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:44.020 17:16:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:44.020 [2024-12-05 17:16:18.009882] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
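Worth noting in the stanza above is how the one-shot initiator config is produced: the helper spdk_tgt on core 1 attaches the NVMe/TCP namespace as bdev ftl (surfacing ftln1), its bdev subsystem state is wrapped into a JSON document, and the helper is killed; every later spdk_dd run replays that JSON offline instead of needing a live initiator. The pieces are all verbatim at ftl/common.sh@171-173 above; only the brace grouping and the redirect target (inferred from the [[ -f ... ]] check at @153) are assumptions:

    {
        echo '{"subsystems": ['
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock \
            save_subsystem_config -n bdev
        echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

That is also why the DPDK banner above belongs to spdk_dd itself: once ini.json exists, tcp_initiator_setup returns at common.sh@154 without spawning anything.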
00:29:44.020 [2024-12-05 17:16:18.010025] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82724 ] 00:29:44.020 [2024-12-05 17:16:18.167288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.020 [2024-12-05 17:16:18.252235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.405  [2024-12-05T17:16:20.715Z] Copying: 240/1024 [MB] (240 MBps) [2024-12-05T17:16:21.658Z] Copying: 484/1024 [MB] (244 MBps) [2024-12-05T17:16:22.598Z] Copying: 731/1024 [MB] (247 MBps) [2024-12-05T17:16:22.857Z] Copying: 976/1024 [MB] (245 MBps) [2024-12-05T17:16:23.428Z] Copying: 1024/1024 [MB] (average 243 MBps) 00:29:49.061 00:29:49.061 Calculate MD5 checksum, iteration 1 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:49.061 17:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:49.061 [2024-12-05 17:16:23.413557] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
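For reference, the two spdk_dd shapes this run keeps alternating between, both command lines verbatim from the log; --seek and --skip count I/O units of --bs bytes on the output and input side respectively, mirroring dd(1) semantics:

    # fill: urandom -> ftln1
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
    # read-back: ftln1 -> plain file, ready for md5sum
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0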
00:29:49.061 [2024-12-05 17:16:23.413665] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82782 ] 00:29:49.322 [2024-12-05 17:16:23.570170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.322 [2024-12-05 17:16:23.648095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:50.705  [2024-12-05T17:16:25.642Z] Copying: 678/1024 [MB] (678 MBps) [2024-12-05T17:16:26.210Z] Copying: 1024/1024 [MB] (average 688 MBps) 00:29:51.843 00:29:51.843 17:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:51.843 17:16:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:53.773 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:53.774 Fill FTL, iteration 2 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8315964310dbe37cda9f52286ac6676a 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:53.774 17:16:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:53.774 [2024-12-05 17:16:27.997696] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
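The offset bookkeeping across the two passes, spelled out (values taken from the traced assignments at upgrade_shutdown.sh@41 and @45):

    # pass 1: --seek=0    fills MiB [0, 1024) of ftln1; --skip=0    reads it back
    # pass 2: --seek=1024 fills MiB [1024, 2048);       --skip=1024 reads it back
    # after each pass the script advances both offsets by count, hence the
    # traced seek=1024/skip=1024 here and seek=2048/skip=2048 after pass 2,
    # so the two MD5 sums cover disjoint 1 GiB halves of the 2 GiB test extent.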
00:29:53.774 [2024-12-05 17:16:27.997812] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82839 ] 00:29:54.036 [2024-12-05 17:16:28.153105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.036 [2024-12-05 17:16:28.233333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.423  [2024-12-05T17:16:30.732Z] Copying: 202/1024 [MB] (202 MBps) [2024-12-05T17:16:31.673Z] Copying: 416/1024 [MB] (214 MBps) [2024-12-05T17:16:32.614Z] Copying: 658/1024 [MB] (242 MBps) [2024-12-05T17:16:33.251Z] Copying: 903/1024 [MB] (245 MBps) [2024-12-05T17:16:33.847Z] Copying: 1024/1024 [MB] (average 228 MBps) 00:29:59.480 00:29:59.480 Calculate MD5 checksum, iteration 2 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:59.480 17:16:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:29:59.480 [2024-12-05 17:16:33.655184] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
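Once this second read-back and its md5sum complete, the log turns to properties: verbose_mode is enabled, and the test only arms prep_upgrade_on_shutdown after confirming the NV cache actually holds data. The check at upgrade_shutdown.sh@63-64 counts cache chunks with non-zero utilization; against the dump below (chunks 1 and 2 CLOSED at 1.0 plus chunk 3 OPEN at 0.001953125) it yields used=3. A sketch with the jq filter verbatim and a hypothetical failure branch, since the log only shows the guard passing:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    if [[ $used -eq 0 ]]; then
        # hypothetical branch: a clean NV cache would make the shutdown-upgrade test vacuous
        echo "NV cache is clean, nothing to migrate on shutdown" >&2
        return 1
    fi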
00:29:59.480 [2024-12-05 17:16:33.655299] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82893 ] 00:29:59.480 [2024-12-05 17:16:33.808013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:59.741 [2024-12-05 17:16:33.885805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.126  [2024-12-05T17:16:35.753Z] Copying: 663/1024 [MB] (663 MBps) [2024-12-05T17:16:36.694Z] Copying: 1024/1024 [MB] (average 712 MBps) 00:30:02.327 00:30:02.327 17:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:02.327 17:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:04.241 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:04.241 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=8c3bb7b700d1b9aed43f8bc476c3c883 00:30:04.241 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:04.241 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:04.241 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:04.502 [2024-12-05 17:16:38.684553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.502 [2024-12-05 17:16:38.684700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:04.502 [2024-12-05 17:16:38.684717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:04.502 [2024-12-05 17:16:38.684724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.502 [2024-12-05 17:16:38.684748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.502 [2024-12-05 17:16:38.684759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:04.502 [2024-12-05 17:16:38.684765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:04.502 [2024-12-05 17:16:38.684771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.502 [2024-12-05 17:16:38.684787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.502 [2024-12-05 17:16:38.684793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:04.502 [2024-12-05 17:16:38.684799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.502 [2024-12-05 17:16:38.684805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.502 [2024-12-05 17:16:38.684855] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.290 ms, result 0 00:30:04.502 true 00:30:04.502 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:04.502 { 00:30:04.502 "name": "ftl", 00:30:04.502 "properties": [ 00:30:04.502 { 00:30:04.502 "name": "superblock_version", 00:30:04.502 "value": 5, 00:30:04.502 "read-only": true 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "name": "base_device", 00:30:04.502 "bands": [ 00:30:04.502 { 00:30:04.502 "id": 0, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 
00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 1, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 2, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 3, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 4, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 5, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 6, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 7, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 8, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 9, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 10, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 11, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 12, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 13, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 14, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 15, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 16, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 17, 00:30:04.502 "state": "FREE", 00:30:04.502 "validity": 0.0 00:30:04.502 } 00:30:04.502 ], 00:30:04.502 "read-only": true 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "name": "cache_device", 00:30:04.502 "type": "bdev", 00:30:04.502 "chunks": [ 00:30:04.502 { 00:30:04.502 "id": 0, 00:30:04.502 "state": "INACTIVE", 00:30:04.502 "utilization": 0.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 1, 00:30:04.502 "state": "CLOSED", 00:30:04.502 "utilization": 1.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 2, 00:30:04.502 "state": "CLOSED", 00:30:04.502 "utilization": 1.0 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 3, 00:30:04.502 "state": "OPEN", 00:30:04.502 "utilization": 0.001953125 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "id": 4, 00:30:04.502 "state": "OPEN", 00:30:04.502 "utilization": 0.0 00:30:04.502 } 00:30:04.502 ], 00:30:04.502 "read-only": true 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "name": "verbose_mode", 00:30:04.502 "value": true, 00:30:04.502 "unit": "", 00:30:04.502 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:04.502 }, 00:30:04.502 { 00:30:04.502 "name": "prep_upgrade_on_shutdown", 00:30:04.502 "value": false, 00:30:04.502 "unit": "", 00:30:04.502 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:04.502 } 00:30:04.502 ] 00:30:04.502 } 00:30:04.502 17:16:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:04.763 [2024-12-05 17:16:39.000795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:04.763 [2024-12-05 17:16:39.000827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:04.763 [2024-12-05 17:16:39.000835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:04.763 [2024-12-05 17:16:39.000841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.763 [2024-12-05 17:16:39.000858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.763 [2024-12-05 17:16:39.000864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:04.763 [2024-12-05 17:16:39.000870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:04.763 [2024-12-05 17:16:39.000875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.763 [2024-12-05 17:16:39.000889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:04.763 [2024-12-05 17:16:39.000896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:04.763 [2024-12-05 17:16:39.000901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:04.763 [2024-12-05 17:16:39.000907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:04.763 [2024-12-05 17:16:39.000961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.142 ms, result 0 00:30:04.763 true 00:30:04.763 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:04.763 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:04.763 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:05.023 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:05.023 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:05.023 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:05.284 [2024-12-05 17:16:39.421371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.284 [2024-12-05 17:16:39.421438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:05.284 [2024-12-05 17:16:39.421453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:05.284 [2024-12-05 17:16:39.421462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.284 [2024-12-05 17:16:39.421488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.284 [2024-12-05 17:16:39.421497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:05.284 [2024-12-05 17:16:39.421507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:05.284 [2024-12-05 17:16:39.421516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:05.284 [2024-12-05 17:16:39.421537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:05.284 [2024-12-05 17:16:39.421546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:05.284 [2024-12-05 17:16:39.421556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:05.284 [2024-12-05 17:16:39.421564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:05.284 [2024-12-05 17:16:39.421628] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.249 ms, result 0 00:30:05.284 true 00:30:05.284 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:05.284 { 00:30:05.284 "name": "ftl", 00:30:05.284 "properties": [ 00:30:05.284 { 00:30:05.284 "name": "superblock_version", 00:30:05.284 "value": 5, 00:30:05.284 "read-only": true 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "name": "base_device", 00:30:05.284 "bands": [ 00:30:05.284 { 00:30:05.284 "id": 0, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 1, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 2, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 3, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 4, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 5, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 6, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 7, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 8, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 9, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 10, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 11, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 12, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 13, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 14, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 15, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 16, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 17, 00:30:05.284 "state": "FREE", 00:30:05.284 "validity": 0.0 00:30:05.284 } 00:30:05.284 ], 00:30:05.284 "read-only": true 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "name": "cache_device", 00:30:05.284 "type": "bdev", 00:30:05.284 "chunks": [ 00:30:05.284 { 00:30:05.284 "id": 0, 00:30:05.284 "state": "INACTIVE", 00:30:05.284 "utilization": 0.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 1, 00:30:05.284 "state": "CLOSED", 00:30:05.284 "utilization": 1.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 2, 00:30:05.284 "state": "CLOSED", 00:30:05.284 "utilization": 1.0 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 3, 00:30:05.284 "state": "OPEN", 00:30:05.284 "utilization": 0.001953125 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "id": 4, 00:30:05.284 "state": "OPEN", 00:30:05.284 "utilization": 0.0 00:30:05.284 } 00:30:05.284 ], 00:30:05.284 "read-only": true 00:30:05.284 }, 00:30:05.284 { 00:30:05.284 "name": "verbose_mode", 
00:30:05.284 "value": true, 00:30:05.284 "unit": "", 00:30:05.285 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:05.285 }, 00:30:05.285 { 00:30:05.285 "name": "prep_upgrade_on_shutdown", 00:30:05.285 "value": true, 00:30:05.285 "unit": "", 00:30:05.285 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:05.285 } 00:30:05.285 ] 00:30:05.285 } 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82548 ]] 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82548 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82548 ']' 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82548 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82548 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82548' 00:30:05.545 killing process with pid 82548 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 82548 00:30:05.545 17:16:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82548 00:30:06.128 [2024-12-05 17:16:40.460488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:06.128 [2024-12-05 17:16:40.476455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.128 [2024-12-05 17:16:40.476671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:06.128 [2024-12-05 17:16:40.476761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:06.128 [2024-12-05 17:16:40.476787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:06.128 [2024-12-05 17:16:40.476834] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:06.128 [2024-12-05 17:16:40.479957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:06.128 [2024-12-05 17:16:40.480115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:06.128 [2024-12-05 17:16:40.480188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.073 ms 00:30:06.128 [2024-12-05 17:16:40.480219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.417737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.417883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:16.124 [2024-12-05 17:16:49.417938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8937.443 ms 00:30:16.124 [2024-12-05 17:16:49.417973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.419034] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.419111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:16.124 [2024-12-05 17:16:49.419122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.035 ms 00:30:16.124 [2024-12-05 17:16:49.419128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.419982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.419996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:16.124 [2024-12-05 17:16:49.420004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.834 ms 00:30:16.124 [2024-12-05 17:16:49.420015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.427723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.427750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:16.124 [2024-12-05 17:16:49.427757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.682 ms 00:30:16.124 [2024-12-05 17:16:49.427763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.433450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.433476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:16.124 [2024-12-05 17:16:49.433484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.662 ms 00:30:16.124 [2024-12-05 17:16:49.433490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.433552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.433563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:16.124 [2024-12-05 17:16:49.433569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:30:16.124 [2024-12-05 17:16:49.433575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.440733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.440758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:16.124 [2024-12-05 17:16:49.440765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.147 ms 00:30:16.124 [2024-12-05 17:16:49.440770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.448647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.448673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:16.124 [2024-12-05 17:16:49.448680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.854 ms 00:30:16.124 [2024-12-05 17:16:49.448692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.124 [2024-12-05 17:16:49.455692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.124 [2024-12-05 17:16:49.455714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:16.124 [2024-12-05 17:16:49.455720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.976 ms 00:30:16.125 [2024-12-05 17:16:49.455725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.463203] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.125 [2024-12-05 17:16:49.463227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:16.125 [2024-12-05 17:16:49.463234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.432 ms 00:30:16.125 [2024-12-05 17:16:49.463239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.463264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:16.125 [2024-12-05 17:16:49.463280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:16.125 [2024-12-05 17:16:49.463288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:16.125 [2024-12-05 17:16:49.463294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:16.125 [2024-12-05 17:16:49.463300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:16.125 [2024-12-05 17:16:49.463385] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:16.125 [2024-12-05 17:16:49.463391] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8a3521ab-2d4e-46df-a40d-788f9cd87bb7 00:30:16.125 [2024-12-05 17:16:49.463397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:16.125 [2024-12-05 17:16:49.463403] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:16.125 [2024-12-05 17:16:49.463408] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:16.125 [2024-12-05 17:16:49.463413] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:16.125 [2024-12-05 17:16:49.463420] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:16.125 [2024-12-05 17:16:49.463426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:16.125 [2024-12-05 17:16:49.463433] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:16.125 [2024-12-05 17:16:49.463438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:16.125 [2024-12-05 17:16:49.463443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:16.125 [2024-12-05 17:16:49.463449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.125 [2024-12-05 17:16:49.463456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:16.125 [2024-12-05 17:16:49.463463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:30:16.125 [2024-12-05 17:16:49.463468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.473043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.125 [2024-12-05 17:16:49.473069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:16.125 [2024-12-05 17:16:49.473082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.556 ms 00:30:16.125 [2024-12-05 17:16:49.473088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.473354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:16.125 [2024-12-05 17:16:49.473365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:16.125 [2024-12-05 17:16:49.473373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:30:16.125 [2024-12-05 17:16:49.473379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.506125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.506233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:16.125 [2024-12-05 17:16:49.506245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.506251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.506272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.506278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:16.125 [2024-12-05 17:16:49.506284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.506290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.506341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.506350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:16.125 [2024-12-05 17:16:49.506359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.506365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.506376] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.506382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:16.125 [2024-12-05 17:16:49.506388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.506394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.564427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.564458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:16.125 [2024-12-05 17:16:49.564470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.564476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:16.125 [2024-12-05 17:16:49.612114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:16.125 [2024-12-05 17:16:49.612194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:16.125 [2024-12-05 17:16:49.612247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:16.125 [2024-12-05 17:16:49.612335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:16.125 [2024-12-05 17:16:49.612378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:16.125 [2024-12-05 17:16:49.612425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 
[2024-12-05 17:16:49.612465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:16.125 [2024-12-05 17:16:49.612473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:16.125 [2024-12-05 17:16:49.612479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:16.125 [2024-12-05 17:16:49.612485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:16.125 [2024-12-05 17:16:49.612576] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9136.087 ms, result 0 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83093 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83093 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83093 ']' 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:19.427 17:16:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:19.427 [2024-12-05 17:16:53.423946] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
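The relaunch sequence visible above, in sketch form: the old target finished 'FTL shutdown' after 9136.087 ms, and tcp_target_setup (upgrade_shutdown.sh@75) brings a fresh spdk_tgt, pid 83093, up on core 0 from the tgt.json saved before shutdown. The spdk_tgt command line is verbatim from ftl/common.sh@85; the backgrounding and pid capture are assumptions, and waitforlisten is the common helper that polls until the RPC socket answers:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # /var/tmp/spdk.sock, per the message above

Replaying the saved config is what re-creates the ftl bdev below and kicks off the superblock load traced at ftl_mngt_load_sb.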
00:30:19.427 [2024-12-05 17:16:53.424076] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83093 ] 00:30:19.427 [2024-12-05 17:16:53.582101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:19.427 [2024-12-05 17:16:53.662662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:19.999 [2024-12-05 17:16:54.230769] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:19.999 [2024-12-05 17:16:54.230826] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:20.262 [2024-12-05 17:16:54.373553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.262 [2024-12-05 17:16:54.373587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:20.262 [2024-12-05 17:16:54.373597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:20.262 [2024-12-05 17:16:54.373603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.262 [2024-12-05 17:16:54.373641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.262 [2024-12-05 17:16:54.373649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:20.262 [2024-12-05 17:16:54.373655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:20.262 [2024-12-05 17:16:54.373661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.262 [2024-12-05 17:16:54.373677] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:20.262 [2024-12-05 17:16:54.374269] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:20.262 [2024-12-05 17:16:54.374287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.262 [2024-12-05 17:16:54.374293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:20.262 [2024-12-05 17:16:54.374300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:30:20.262 [2024-12-05 17:16:54.374305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.262 [2024-12-05 17:16:54.375258] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:20.262 [2024-12-05 17:16:54.384966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.262 [2024-12-05 17:16:54.384992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:20.262 [2024-12-05 17:16:54.385004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.709 ms 00:30:20.262 [2024-12-05 17:16:54.385010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.385053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.385060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:20.263 [2024-12-05 17:16:54.385067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:20.263 [2024-12-05 17:16:54.385072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.389332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 
17:16:54.389355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:20.263 [2024-12-05 17:16:54.389362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.214 ms 00:30:20.263 [2024-12-05 17:16:54.389368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.389409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.389415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:20.263 [2024-12-05 17:16:54.389422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:20.263 [2024-12-05 17:16:54.389427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.389459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.389468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:20.263 [2024-12-05 17:16:54.389474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.263 [2024-12-05 17:16:54.389480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.389495] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:20.263 [2024-12-05 17:16:54.392233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.392255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:20.263 [2024-12-05 17:16:54.392262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.742 ms 00:30:20.263 [2024-12-05 17:16:54.392270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.392293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.392300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:20.263 [2024-12-05 17:16:54.392306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:20.263 [2024-12-05 17:16:54.392311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.392327] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:20.263 [2024-12-05 17:16:54.392344] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:20.263 [2024-12-05 17:16:54.392369] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:20.263 [2024-12-05 17:16:54.392380] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:20.263 [2024-12-05 17:16:54.392458] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:20.263 [2024-12-05 17:16:54.392466] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:20.263 [2024-12-05 17:16:54.392474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:20.263 [2024-12-05 17:16:54.392482] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392489] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392497] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:20.263 [2024-12-05 17:16:54.392503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:20.263 [2024-12-05 17:16:54.392509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:20.263 [2024-12-05 17:16:54.392514] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:20.263 [2024-12-05 17:16:54.392520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.392526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:20.263 [2024-12-05 17:16:54.392532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:30:20.263 [2024-12-05 17:16:54.392537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.392602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.263 [2024-12-05 17:16:54.392608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:20.263 [2024-12-05 17:16:54.392616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:20.263 [2024-12-05 17:16:54.392622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.263 [2024-12-05 17:16:54.392705] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:20.263 [2024-12-05 17:16:54.392713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:20.263 [2024-12-05 17:16:54.392719] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:20.263 [2024-12-05 17:16:54.392735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:20.263 [2024-12-05 17:16:54.392746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:20.263 [2024-12-05 17:16:54.392752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:20.263 [2024-12-05 17:16:54.392758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:20.263 [2024-12-05 17:16:54.392768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:20.263 [2024-12-05 17:16:54.392773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:20.263 [2024-12-05 17:16:54.392785] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:20.263 [2024-12-05 17:16:54.392789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:20.263 [2024-12-05 17:16:54.392799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:20.263 [2024-12-05 17:16:54.392804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392809] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:20.263 [2024-12-05 17:16:54.392815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:20.263 [2024-12-05 17:16:54.392820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:20.263 [2024-12-05 17:16:54.392834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:20.263 [2024-12-05 17:16:54.392839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:20.263 [2024-12-05 17:16:54.392849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:20.263 [2024-12-05 17:16:54.392854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:20.263 [2024-12-05 17:16:54.392863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:20.263 [2024-12-05 17:16:54.392868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:20.263 [2024-12-05 17:16:54.392878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:20.263 [2024-12-05 17:16:54.392884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:20.263 [2024-12-05 17:16:54.392893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:20.263 [2024-12-05 17:16:54.392908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:20.263 [2024-12-05 17:16:54.392923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:20.263 [2024-12-05 17:16:54.392927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.392932] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:20.263 [2024-12-05 17:16:54.392938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:20.263 [2024-12-05 17:16:54.392944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:20.263 [2024-12-05 17:16:54.392966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:20.263 [2024-12-05 17:16:54.393018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:20.263 [2024-12-05 17:16:54.393023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:20.263 [2024-12-05 17:16:54.393029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:20.263 [2024-12-05 17:16:54.393034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:20.263 [2024-12-05 17:16:54.393039] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:20.264 [2024-12-05 17:16:54.393044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:20.264 [2024-12-05 17:16:54.393051] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:20.264 [2024-12-05 17:16:54.393057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:20.264 [2024-12-05 17:16:54.393070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:20.264 [2024-12-05 17:16:54.393086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:20.264 [2024-12-05 17:16:54.393092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:20.264 [2024-12-05 17:16:54.393097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:20.264 [2024-12-05 17:16:54.393102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:20.264 [2024-12-05 17:16:54.393141] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:20.264 [2024-12-05 17:16:54.393147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393153] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:20.264 [2024-12-05 17:16:54.393159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:20.264 [2024-12-05 17:16:54.393165] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:20.264 [2024-12-05 17:16:54.393171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:20.264 [2024-12-05 17:16:54.393176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:20.264 [2024-12-05 17:16:54.393182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:20.264 [2024-12-05 17:16:54.393188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:30:20.264 [2024-12-05 17:16:54.393194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:20.264 [2024-12-05 17:16:54.393226] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:20.264 [2024-12-05 17:16:54.393233] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:25.561 [2024-12-05 17:16:59.741402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.741752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:25.561 [2024-12-05 17:16:59.741901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5348.155 ms 00:30:25.561 [2024-12-05 17:16:59.741932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.774102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.774327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:25.561 [2024-12-05 17:16:59.774397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.879 ms 00:30:25.561 [2024-12-05 17:16:59.774422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.774541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.774577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:25.561 [2024-12-05 17:16:59.774599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:25.561 [2024-12-05 17:16:59.774676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.810214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.810418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:25.561 [2024-12-05 17:16:59.810495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.447 ms 00:30:25.561 [2024-12-05 17:16:59.810519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.810580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.810604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:25.561 [2024-12-05 17:16:59.810624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:25.561 [2024-12-05 17:16:59.810644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.811267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.811429] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:25.561 [2024-12-05 17:16:59.811501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:30:25.561 [2024-12-05 17:16:59.811525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.811608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.811630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:25.561 [2024-12-05 17:16:59.811651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:30:25.561 [2024-12-05 17:16:59.811669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.829559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.829737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:25.561 [2024-12-05 17:16:59.829807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.808 ms 00:30:25.561 [2024-12-05 17:16:59.829830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.861931] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:25.561 [2024-12-05 17:16:59.862149] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:25.561 [2024-12-05 17:16:59.862174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.862184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:25.561 [2024-12-05 17:16:59.862195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.188 ms 00:30:25.561 [2024-12-05 17:16:59.862205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.561 [2024-12-05 17:16:59.877396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.561 [2024-12-05 17:16:59.877446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:25.562 [2024-12-05 17:16:59.877460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.139 ms 00:30:25.562 [2024-12-05 17:16:59.877468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.562 [2024-12-05 17:16:59.889967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.562 [2024-12-05 17:16:59.890027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:25.562 [2024-12-05 17:16:59.890040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.440 ms 00:30:25.562 [2024-12-05 17:16:59.890047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.562 [2024-12-05 17:16:59.902842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.562 [2024-12-05 17:16:59.902892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:25.562 [2024-12-05 17:16:59.902903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.744 ms 00:30:25.562 [2024-12-05 17:16:59.902911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.562 [2024-12-05 17:16:59.903590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.562 [2024-12-05 17:16:59.903626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:25.562 [2024-12-05 
17:16:59.903637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.542 ms 00:30:25.562 [2024-12-05 17:16:59.903645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.976867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.976937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:25.823 [2024-12-05 17:16:59.976971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.200 ms 00:30:25.823 [2024-12-05 17:16:59.976982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.988586] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:25.823 [2024-12-05 17:16:59.989743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.989791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:25.823 [2024-12-05 17:16:59.989804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.699 ms 00:30:25.823 [2024-12-05 17:16:59.989813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.989898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.989913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:25.823 [2024-12-05 17:16:59.989923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:25.823 [2024-12-05 17:16:59.989930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.990025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.990039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:25.823 [2024-12-05 17:16:59.990048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:25.823 [2024-12-05 17:16:59.990056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.990082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.990092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:25.823 [2024-12-05 17:16:59.990105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:25.823 [2024-12-05 17:16:59.990113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:16:59.990147] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:25.823 [2024-12-05 17:16:59.990158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:16:59.990166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:25.823 [2024-12-05 17:16:59.990175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:25.823 [2024-12-05 17:16:59.990184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:17:00.022668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:17:00.022753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:25.823 [2024-12-05 17:17:00.022784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.457 ms 00:30:25.823 [2024-12-05 17:17:00.022799] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:17:00.022928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.823 [2024-12-05 17:17:00.022974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:25.823 [2024-12-05 17:17:00.023008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:25.823 [2024-12-05 17:17:00.023020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.823 [2024-12-05 17:17:00.024922] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5650.657 ms, result 0 00:30:25.823 [2024-12-05 17:17:00.039184] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:25.823 [2024-12-05 17:17:00.055205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:25.823 [2024-12-05 17:17:00.063781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:25.823 17:17:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.823 17:17:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:25.823 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:25.823 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:25.823 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:26.084 [2024-12-05 17:17:00.307871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.084 [2024-12-05 17:17:00.307940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.084 [2024-12-05 17:17:00.307980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:26.084 [2024-12-05 17:17:00.307990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.084 [2024-12-05 17:17:00.308019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.084 [2024-12-05 17:17:00.308028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.084 [2024-12-05 17:17:00.308037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.084 [2024-12-05 17:17:00.308045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.084 [2024-12-05 17:17:00.308067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.084 [2024-12-05 17:17:00.308076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.084 [2024-12-05 17:17:00.308086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.084 [2024-12-05 17:17:00.308094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.084 [2024-12-05 17:17:00.308162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.288 ms, result 0 00:30:26.084 true 00:30:26.084 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.345 { 00:30:26.345 "name": "ftl", 00:30:26.345 "properties": [ 00:30:26.345 { 00:30:26.345 "name": "superblock_version", 00:30:26.345 "value": 5, 00:30:26.345 "read-only": true 00:30:26.345 }, 
00:30:26.345 { 00:30:26.345 "name": "base_device", 00:30:26.345 "bands": [ 00:30:26.345 { 00:30:26.345 "id": 0, 00:30:26.345 "state": "CLOSED", 00:30:26.345 "validity": 1.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 1, 00:30:26.345 "state": "CLOSED", 00:30:26.345 "validity": 1.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 2, 00:30:26.345 "state": "CLOSED", 00:30:26.345 "validity": 0.007843137254901933 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 3, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 4, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 5, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 6, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 7, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 8, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 9, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 10, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 11, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 12, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 13, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 14, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 15, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 16, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 17, 00:30:26.345 "state": "FREE", 00:30:26.345 "validity": 0.0 00:30:26.345 } 00:30:26.345 ], 00:30:26.345 "read-only": true 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "name": "cache_device", 00:30:26.345 "type": "bdev", 00:30:26.345 "chunks": [ 00:30:26.345 { 00:30:26.345 "id": 0, 00:30:26.345 "state": "INACTIVE", 00:30:26.345 "utilization": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 1, 00:30:26.345 "state": "OPEN", 00:30:26.345 "utilization": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 2, 00:30:26.345 "state": "OPEN", 00:30:26.345 "utilization": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 3, 00:30:26.345 "state": "FREE", 00:30:26.345 "utilization": 0.0 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "id": 4, 00:30:26.345 "state": "FREE", 00:30:26.345 "utilization": 0.0 00:30:26.345 } 00:30:26.345 ], 00:30:26.345 "read-only": true 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "name": "verbose_mode", 00:30:26.345 "value": true, 00:30:26.345 "unit": "", 00:30:26.345 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:26.345 }, 00:30:26.345 { 00:30:26.345 "name": "prep_upgrade_on_shutdown", 00:30:26.345 "value": false, 00:30:26.345 "unit": "", 00:30:26.345 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:26.345 } 00:30:26.345 ] 00:30:26.345 } 00:30:26.345 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:26.345 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:26.345 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.606 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:26.606 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:26.606 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:26.606 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.606 17:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:26.866 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:26.867 Validate MD5 checksum, iteration 1 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:26.867 17:17:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:26.867 [2024-12-05 17:17:01.090081] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:30:26.867 [2024-12-05 17:17:01.090225] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83192 ] 00:30:27.127 [2024-12-05 17:17:01.254436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:27.127 [2024-12-05 17:17:01.360419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:28.512  [2024-12-05T17:17:03.819Z] Copying: 593/1024 [MB] (593 MBps) [2024-12-05T17:17:04.762Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:30:30.395 00:30:30.395 17:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:30.395 17:17:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:32.936 Validate MD5 checksum, iteration 2 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8315964310dbe37cda9f52286ac6676a 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8315964310dbe37cda9f52286ac6676a != \8\3\1\5\9\6\4\3\1\0\d\b\e\3\7\c\d\a\9\f\5\2\2\8\6\a\c\6\6\7\6\a ]] 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:32.936 17:17:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:32.936 [2024-12-05 17:17:06.807555] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
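[Editorial note: the two "Validate MD5 checksum" iterations in this stretch of the log follow one fixed pattern: read a 1 GiB window (1024 blocks of 1 MiB, queue depth 2) from the ftln1 bdev over NVMe/TCP with spdk_dd, hash the output file, and compare against the checksum recorded when that window was written. A condensed sketch of that loop under stated assumptions — $rootdir stands for /home/vagrant/spdk_repo/spdk, and ref_md5 is a hypothetical array holding the previously recorded per-window checksums (the real script keeps its own bookkeeping):

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd in the trace expands to this spdk_dd invocation
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" \
            --ib=ftln1 --of="$rootdir/test/ftl/file" \
            --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$rootdir/test/ftl/file" | cut -f1 -d' ')
        [[ $sum != "${ref_md5[i]}" ]] && return 1   # any mismatch fails the test
    done

The backslash-escaped hash on the right-hand side of != in the trace is just bash xtrace rendering the comparison pattern; both iterations match here (8315964310... and 8c3bb7b700...), so the loop completes cleanly.]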
00:30:32.936 [2024-12-05 17:17:06.807672] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83253 ] 00:30:32.936 [2024-12-05 17:17:06.972501] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.936 [2024-12-05 17:17:07.065173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.321  [2024-12-05T17:17:09.257Z] Copying: 676/1024 [MB] (676 MBps) [2024-12-05T17:17:10.195Z] Copying: 1024/1024 [MB] (average 691 MBps) 00:30:35.828 00:30:35.828 17:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:35.828 17:17:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8c3bb7b700d1b9aed43f8bc476c3c883 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c3bb7b700d1b9aed43f8bc476c3c883 != \8\c\3\b\b\7\b\7\0\0\d\1\b\9\a\e\d\4\3\f\8\b\c\4\7\6\c\3\c\8\8\3 ]] 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 83093 ]] 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 83093 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:37.741 17:17:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83313 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83313 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83313 ']' 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.741 17:17:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:37.741 [2024-12-05 17:17:12.065308] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:37.741 [2024-12-05 17:17:12.065397] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83313 ] 00:30:38.002 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 83093 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:38.002 [2024-12-05 17:17:12.214008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:38.002 [2024-12-05 17:17:12.289241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:38.572 [2024-12-05 17:17:12.856465] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:38.572 [2024-12-05 17:17:12.856522] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:38.835 [2024-12-05 17:17:12.999171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:12.999207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:38.835 [2024-12-05 17:17:12.999218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:38.835 [2024-12-05 17:17:12.999224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:12.999262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:12.999269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:38.835 [2024-12-05 17:17:12.999275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:38.835 [2024-12-05 17:17:12.999281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:12.999297] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:38.835 [2024-12-05 17:17:12.999833] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:38.835 [2024-12-05 17:17:12.999851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:12.999857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:38.835 [2024-12-05 17:17:12.999863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.559 ms 00:30:38.835 [2024-12-05 17:17:12.999869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.000095] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:38.835 [2024-12-05 17:17:13.012322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.012352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:38.835 [2024-12-05 17:17:13.012361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.227 ms 00:30:38.835 [2024-12-05 17:17:13.012368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.019233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:38.835 [2024-12-05 17:17:13.019261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:38.835 [2024-12-05 17:17:13.019268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:38.835 [2024-12-05 17:17:13.019274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.019506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.019521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:38.835 [2024-12-05 17:17:13.019528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.177 ms 00:30:38.835 [2024-12-05 17:17:13.019534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.019571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.019583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:38.835 [2024-12-05 17:17:13.019589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:38.835 [2024-12-05 17:17:13.019594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.019612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.019619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:38.835 [2024-12-05 17:17:13.019625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:38.835 [2024-12-05 17:17:13.019630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.019645] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:38.835 [2024-12-05 17:17:13.021931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.021970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:38.835 [2024-12-05 17:17:13.021978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.289 ms 00:30:38.835 [2024-12-05 17:17:13.021983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.022005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.022011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:38.835 [2024-12-05 17:17:13.022017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:38.835 [2024-12-05 17:17:13.022023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.022039] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:38.835 [2024-12-05 17:17:13.022053] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:38.835 [2024-12-05 17:17:13.022080] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:38.835 [2024-12-05 17:17:13.022093] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:38.835 [2024-12-05 17:17:13.022175] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:38.835 [2024-12-05 17:17:13.022183] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:38.835 [2024-12-05 17:17:13.022191] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:38.835 [2024-12-05 17:17:13.022198] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:38.835 [2024-12-05 17:17:13.022205] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:38.835 [2024-12-05 17:17:13.022211] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:38.835 [2024-12-05 17:17:13.022217] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:38.835 [2024-12-05 17:17:13.022222] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:38.835 [2024-12-05 17:17:13.022227] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:38.835 [2024-12-05 17:17:13.022235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.022241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:38.835 [2024-12-05 17:17:13.022246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:30:38.835 [2024-12-05 17:17:13.022252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.022316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.835 [2024-12-05 17:17:13.022322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:38.835 [2024-12-05 17:17:13.022328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:38.835 [2024-12-05 17:17:13.022334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.835 [2024-12-05 17:17:13.022408] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:38.835 [2024-12-05 17:17:13.022422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:38.835 [2024-12-05 17:17:13.022428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:38.835 [2024-12-05 17:17:13.022434] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:38.835 [2024-12-05 17:17:13.022445] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:38.835 [2024-12-05 17:17:13.022456] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:38.835 [2024-12-05 17:17:13.022462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:38.835 [2024-12-05 17:17:13.022467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:38.835 [2024-12-05 17:17:13.022477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:38.835 [2024-12-05 17:17:13.022482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:38.835 [2024-12-05 17:17:13.022495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:38.835 [2024-12-05 17:17:13.022500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:38.835 [2024-12-05 17:17:13.022509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:38.835 [2024-12-05 17:17:13.022515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.835 [2024-12-05 17:17:13.022520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:38.835 [2024-12-05 17:17:13.022525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:38.835 [2024-12-05 17:17:13.022533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.835 [2024-12-05 17:17:13.022539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:38.835 [2024-12-05 17:17:13.022544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:38.835 [2024-12-05 17:17:13.022549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:38.836 [2024-12-05 17:17:13.022559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:38.836 [2024-12-05 17:17:13.022564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:38.836 [2024-12-05 17:17:13.022573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:38.836 [2024-12-05 17:17:13.022578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:38.836 [2024-12-05 17:17:13.022588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:38.836 [2024-12-05 17:17:13.022592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:38.836 [2024-12-05 17:17:13.022602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:38.836 [2024-12-05 17:17:13.022617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:38.836 [2024-12-05 17:17:13.022632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:38.836 [2024-12-05 17:17:13.022636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022641] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:38.836 [2024-12-05 17:17:13.022646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:38.836 [2024-12-05 17:17:13.022654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:38.836 [2024-12-05 17:17:13.022665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:38.836 [2024-12-05 17:17:13.022670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:38.836 [2024-12-05 17:17:13.022675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:38.836 [2024-12-05 17:17:13.022680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:38.836 [2024-12-05 17:17:13.022685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:38.836 [2024-12-05 17:17:13.022690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:38.836 [2024-12-05 17:17:13.022696] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:38.836 [2024-12-05 17:17:13.022703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:38.836 [2024-12-05 17:17:13.022715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:38.836 [2024-12-05 17:17:13.022730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:38.836 [2024-12-05 17:17:13.022736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:38.836 [2024-12-05 17:17:13.022741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:38.836 [2024-12-05 17:17:13.022746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022778] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:38.836 [2024-12-05 17:17:13.022783] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:38.836 [2024-12-05 17:17:13.022789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:38.836 [2024-12-05 17:17:13.022802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:38.836 [2024-12-05 17:17:13.022808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:38.836 [2024-12-05 17:17:13.022813] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:38.836 [2024-12-05 17:17:13.022819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.022824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:38.836 [2024-12-05 17:17:13.022831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.463 ms 00:30:38.836 [2024-12-05 17:17:13.022836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.041548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.041572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:38.836 [2024-12-05 17:17:13.041580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.677 ms 00:30:38.836 [2024-12-05 17:17:13.041586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.041614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.041620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:38.836 [2024-12-05 17:17:13.041626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:38.836 [2024-12-05 17:17:13.041632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.065296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.065323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:38.836 [2024-12-05 17:17:13.065331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.625 ms 00:30:38.836 [2024-12-05 17:17:13.065336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.065357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.065363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:38.836 [2024-12-05 17:17:13.065369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:38.836 [2024-12-05 17:17:13.065377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.065444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.065452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:38.836 [2024-12-05 17:17:13.065459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:38.836 [2024-12-05 17:17:13.065464] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.065494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.065500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:38.836 [2024-12-05 17:17:13.065506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:38.836 [2024-12-05 17:17:13.065511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.076833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.076858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:38.836 [2024-12-05 17:17:13.076866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.302 ms 00:30:38.836 [2024-12-05 17:17:13.076871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.076944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.076962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:38.836 [2024-12-05 17:17:13.076969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:38.836 [2024-12-05 17:17:13.076974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.110848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.110889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:38.836 [2024-12-05 17:17:13.110902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.859 ms 00:30:38.836 [2024-12-05 17:17:13.110910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.118448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.118475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:38.836 [2024-12-05 17:17:13.118489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:30:38.836 [2024-12-05 17:17:13.118494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.162651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.836 [2024-12-05 17:17:13.162689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:38.836 [2024-12-05 17:17:13.162698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.114 ms 00:30:38.836 [2024-12-05 17:17:13.162705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.836 [2024-12-05 17:17:13.162808] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:38.836 [2024-12-05 17:17:13.162881] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:38.836 [2024-12-05 17:17:13.162965] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:38.836 [2024-12-05 17:17:13.163040] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:38.836 [2024-12-05 17:17:13.163047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.837 [2024-12-05 17:17:13.163053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:38.837 [2024-12-05 
17:17:13.163060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.308 ms 00:30:38.837 [2024-12-05 17:17:13.163066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.837 [2024-12-05 17:17:13.163109] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:38.837 [2024-12-05 17:17:13.163118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.837 [2024-12-05 17:17:13.163127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:38.837 [2024-12-05 17:17:13.163133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:38.837 [2024-12-05 17:17:13.163139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.837 [2024-12-05 17:17:13.174656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.837 [2024-12-05 17:17:13.174688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:38.837 [2024-12-05 17:17:13.174696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.501 ms 00:30:38.837 [2024-12-05 17:17:13.174702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.837 [2024-12-05 17:17:13.181134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.837 [2024-12-05 17:17:13.181160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:38.837 [2024-12-05 17:17:13.181168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:38.837 [2024-12-05 17:17:13.181174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:38.837 [2024-12-05 17:17:13.181236] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:38.837 [2024-12-05 17:17:13.181347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:38.837 [2024-12-05 17:17:13.181363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:38.837 [2024-12-05 17:17:13.181370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:30:38.837 [2024-12-05 17:17:13.181376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.781 [2024-12-05 17:17:13.918139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.781 [2024-12-05 17:17:13.918223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:39.781 [2024-12-05 17:17:13.918240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 736.164 ms 00:30:39.781 [2024-12-05 17:17:13.918249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.781 [2024-12-05 17:17:13.923475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.781 [2024-12-05 17:17:13.923531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:39.781 [2024-12-05 17:17:13.923541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.890 ms 00:30:39.781 [2024-12-05 17:17:13.923550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.781 [2024-12-05 17:17:13.924533] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:39.781 [2024-12-05 17:17:13.924582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.781 [2024-12-05 17:17:13.924591] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:39.781 [2024-12-05 17:17:13.924602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.991 ms 00:30:39.781 [2024-12-05 17:17:13.924610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.781 [2024-12-05 17:17:13.924648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.781 [2024-12-05 17:17:13.924657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:39.781 [2024-12-05 17:17:13.924667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:39.781 [2024-12-05 17:17:13.924682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:39.781 [2024-12-05 17:17:13.924732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 743.489 ms, result 0 00:30:39.781 [2024-12-05 17:17:13.924775] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:39.781 [2024-12-05 17:17:13.924940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:39.781 [2024-12-05 17:17:13.924984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:39.781 [2024-12-05 17:17:13.924995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.165 ms 00:30:39.781 [2024-12-05 17:17:13.925002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.674502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.674553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:40.428 [2024-12-05 17:17:14.674572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 748.355 ms 00:30:40.428 [2024-12-05 17:17:14.674578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.678078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.678108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:40.428 [2024-12-05 17:17:14.678116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.063 ms 00:30:40.428 [2024-12-05 17:17:14.678121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.678630] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:40.428 [2024-12-05 17:17:14.678658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.678664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:40.428 [2024-12-05 17:17:14.678671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.515 ms 00:30:40.428 [2024-12-05 17:17:14.678677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.678700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.678706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:40.428 [2024-12-05 17:17:14.678712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:40.428 [2024-12-05 17:17:14.678717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 
17:17:14.678743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 753.969 ms, result 0 00:30:40.428 [2024-12-05 17:17:14.678775] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:40.428 [2024-12-05 17:17:14.678782] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:40.428 [2024-12-05 17:17:14.678790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.678796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:40.428 [2024-12-05 17:17:14.678802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1497.566 ms 00:30:40.428 [2024-12-05 17:17:14.678807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.678828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.678837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:40.428 [2024-12-05 17:17:14.678844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:40.428 [2024-12-05 17:17:14.678850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.428 [2024-12-05 17:17:14.687417] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:40.428 [2024-12-05 17:17:14.687496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.428 [2024-12-05 17:17:14.687505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:40.428 [2024-12-05 17:17:14.687512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.635 ms 00:30:40.428 [2024-12-05 17:17:14.687517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.688056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.688076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:40.429 [2024-12-05 17:17:14.688085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.491 ms 00:30:40.429 [2024-12-05 17:17:14.688091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.689764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.689781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:40.429 [2024-12-05 17:17:14.689788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.661 ms 00:30:40.429 [2024-12-05 17:17:14.689795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.689823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.689830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:40.429 [2024-12-05 17:17:14.689836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:40.429 [2024-12-05 17:17:14.689844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.689919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.689926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:40.429 
[2024-12-05 17:17:14.689932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:30:40.429 [2024-12-05 17:17:14.689938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.689962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.689969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:40.429 [2024-12-05 17:17:14.689975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:40.429 [2024-12-05 17:17:14.689981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.690007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:40.429 [2024-12-05 17:17:14.690014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.690020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:40.429 [2024-12-05 17:17:14.690026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:40.429 [2024-12-05 17:17:14.690032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.690068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.429 [2024-12-05 17:17:14.690074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:40.429 [2024-12-05 17:17:14.690080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:30:40.429 [2024-12-05 17:17:14.690085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.429 [2024-12-05 17:17:14.690898] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1691.377 ms, result 0 00:30:40.429 [2024-12-05 17:17:14.703728] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.429 [2024-12-05 17:17:14.719733] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:40.429 [2024-12-05 17:17:14.727840] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:40.429 Validate MD5 checksum, iteration 1 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:40.429 17:17:14 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:40.429 17:17:14 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:40.695 [2024-12-05 17:17:14.818946] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:40.695 [2024-12-05 17:17:14.819072] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83349 ] 00:30:40.695 [2024-12-05 17:17:14.976379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.954 [2024-12-05 17:17:15.069300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.338  [2024-12-05T17:17:17.647Z] Copying: 583/1024 [MB] (583 MBps) [2024-12-05T17:17:18.588Z] Copying: 1024/1024 [MB] (average 582 MBps) 00:30:44.221 00:30:44.221 17:17:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:44.221 17:17:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8315964310dbe37cda9f52286ac6676a 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8315964310dbe37cda9f52286ac6676a != \8\3\1\5\9\6\4\3\1\0\d\b\e\3\7\c\d\a\9\f\5\2\2\8\6\a\c\6\6\7\6\a ]] 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:46.131 Validate MD5 checksum, iteration 2 00:30:46.131 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:46.132 17:17:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:46.132 [2024-12-05 17:17:20.428569] Starting SPDK v25.01-pre git sha1 
8d3947977 / DPDK 24.03.0 initialization... 00:30:46.132 [2024-12-05 17:17:20.428660] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83416 ] 00:30:46.393 [2024-12-05 17:17:20.575706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.393 [2024-12-05 17:17:20.651339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.778  [2024-12-05T17:17:22.716Z] Copying: 663/1024 [MB] (663 MBps) [2024-12-05T17:17:28.005Z] Copying: 1024/1024 [MB] (average 681 MBps) 00:30:53.638 00:30:53.638 17:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:53.638 17:17:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=8c3bb7b700d1b9aed43f8bc476c3c883 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 8c3bb7b700d1b9aed43f8bc476c3c883 != \8\c\3\b\b\7\b\7\0\0\d\1\b\9\a\e\d\4\3\f\8\b\c\4\7\6\c\3\c\8\8\3 ]] 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83313 ]] 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83313 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83313 ']' 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83313 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83313 00:30:56.185 killing process with pid 83313 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83313' 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 83313 00:30:56.185 17:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83313 00:30:56.447 [2024-12-05 17:17:30.686140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:56.447 [2024-12-05 17:17:30.698245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.698275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:56.447 [2024-12-05 17:17:30.698285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:56.447 [2024-12-05 17:17:30.698291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.447 [2024-12-05 17:17:30.698309] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:56.447 [2024-12-05 17:17:30.700377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.700398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:56.447 [2024-12-05 17:17:30.700409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.059 ms 00:30:56.447 [2024-12-05 17:17:30.700415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.447 [2024-12-05 17:17:30.700601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.700609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:56.447 [2024-12-05 17:17:30.700616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:30:56.447 [2024-12-05 17:17:30.700622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.447 [2024-12-05 17:17:30.702032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.702053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:56.447 [2024-12-05 17:17:30.702060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.398 ms 00:30:56.447 [2024-12-05 17:17:30.702068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.447 [2024-12-05 17:17:30.702929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.702942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:56.447 [2024-12-05 17:17:30.702963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.837 ms 00:30:56.447 [2024-12-05 17:17:30.702970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.447 [2024-12-05 17:17:30.711138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.447 [2024-12-05 17:17:30.711159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:56.447 [2024-12-05 17:17:30.711167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.135 ms 00:30:56.447 [2024-12-05 17:17:30.711177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.715629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.715651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:56.448 [2024-12-05 17:17:30.715659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.426 ms 00:30:56.448 [2024-12-05 17:17:30.715665] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.715725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.715732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:56.448 [2024-12-05 17:17:30.715738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:56.448 [2024-12-05 17:17:30.715748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.723465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.723486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:56.448 [2024-12-05 17:17:30.723493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.704 ms 00:30:56.448 [2024-12-05 17:17:30.723499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.731372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.731392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:56.448 [2024-12-05 17:17:30.731399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.849 ms 00:30:56.448 [2024-12-05 17:17:30.731404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.738896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.738916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:56.448 [2024-12-05 17:17:30.738923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.467 ms 00:30:56.448 [2024-12-05 17:17:30.738928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.746500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.746520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:56.448 [2024-12-05 17:17:30.746526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.511 ms 00:30:56.448 [2024-12-05 17:17:30.746532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.746555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:56.448 [2024-12-05 17:17:30.746566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:56.448 [2024-12-05 17:17:30.746574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:56.448 [2024-12-05 17:17:30.746581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:56.448 [2024-12-05 17:17:30.746587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 
[2024-12-05 17:17:30.746617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:56.448 [2024-12-05 17:17:30.746676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:56.448 [2024-12-05 17:17:30.746682] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 8a3521ab-2d4e-46df-a40d-788f9cd87bb7 00:30:56.448 [2024-12-05 17:17:30.746689] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:56.448 [2024-12-05 17:17:30.746694] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:56.448 [2024-12-05 17:17:30.746700] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:56.448 [2024-12-05 17:17:30.746706] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:56.448 [2024-12-05 17:17:30.746711] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:56.448 [2024-12-05 17:17:30.746717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:56.448 [2024-12-05 17:17:30.746726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:56.448 [2024-12-05 17:17:30.746731] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:56.448 [2024-12-05 17:17:30.746736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:56.448 [2024-12-05 17:17:30.746742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.746748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:56.448 [2024-12-05 17:17:30.746755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:30:56.448 [2024-12-05 17:17:30.746760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.448 [2024-12-05 17:17:30.756363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.448 [2024-12-05 17:17:30.756384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:56.448 [2024-12-05 17:17:30.756392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.590 ms 00:30:56.448 [2024-12-05 17:17:30.756398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
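For reference, the two "Validate MD5 checksum" passes traced earlier in this log boil down to a small read-and-compare loop. The sketch below is condensed from the upgrade_shutdown.sh xtrace above (lines 96-105 of that script as shown in the trace); the "iterations" count and the "md5[]" array holding the checksums recorded before the shutdown are assumptions about the surrounding script, and tcp_dd is the ftl/common.sh helper (common.sh@198-199 in the trace) that drives spdk_dd against the NVMe/TCP export of ftln1:

    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read 1024 x 1 MiB blocks from ftln1 over NVMe/TCP into a scratch file
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        # hash the slice and compare against the checksum taken before shutdown
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${md5[i]}" ]] || return 1
    done

A mismatch here would mean data written before the fast shutdown did not survive the restart; both iterations above matched (8315964310dbe37cda9f52286ac6676a and 8c3bb7b700d1b9aed43f8bc476c3c883).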
00:30:56.448 [2024-12-05 17:17:30.756667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:30:56.448 [2024-12-05 17:17:30.756679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:30:56.448 [2024-12-05 17:17:30.756694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms
00:30:56.448 [2024-12-05 17:17:30.756700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.448 [2024-12-05 17:17:30.789551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.448 [2024-12-05 17:17:30.789573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:30:56.448 [2024-12-05 17:17:30.789581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.448 [2024-12-05 17:17:30.789587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.448 [2024-12-05 17:17:30.789611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.448 [2024-12-05 17:17:30.789617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:30:56.448 [2024-12-05 17:17:30.789624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.448 [2024-12-05 17:17:30.789630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.448 [2024-12-05 17:17:30.789691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.448 [2024-12-05 17:17:30.789699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:30:56.448 [2024-12-05 17:17:30.789705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.448 [2024-12-05 17:17:30.789711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.448 [2024-12-05 17:17:30.789726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.448 [2024-12-05 17:17:30.789734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:30:56.448 [2024-12-05 17:17:30.789740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.448 [2024-12-05 17:17:30.789745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.849781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.849808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:30:56.710 [2024-12-05 17:17:30.849816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.849822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:30:56.710 [2024-12-05 17:17:30.899123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:30:56.710 [2024-12-05 17:17:30.899190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:30:56.710 [2024-12-05 17:17:30.899259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:30:56.710 [2024-12-05 17:17:30.899347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:30:56.710 [2024-12-05 17:17:30.899390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:30:56.710 [2024-12-05 17:17:30.899437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:30:56.710 [2024-12-05 17:17:30.899483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:30:56.710 [2024-12-05 17:17:30.899489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:30:56.710 [2024-12-05 17:17:30.899495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:30:56.710 [2024-12-05 17:17:30.899581] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 201.326 ms, result 0
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:30:57.280 Remove shared memory files
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid83093
00:30:57.280 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:30:57.281 17:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:30:57.281
00:30:57.281 real 1m26.459s
00:30:57.281 user 1m58.580s
00:30:57.281 sys 0m17.618s
00:30:57.281 17:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:57.281 ************************************
00:30:57.281 END TEST ftl_upgrade_shutdown
00:30:57.281 ************************************
00:30:57.281 17:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:30:57.281 17:17:31 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]]
00:30:57.281 17:17:31 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0
00:30:57.281 17:17:31 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:30:57.281 17:17:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:57.281 17:17:31 ftl -- common/autotest_common.sh@10 -- # set +x
00:30:57.281 ************************************
00:30:57.281 START TEST ftl_restore_fast
00:30:57.281 ************************************
00:30:57.281 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0
00:30:57.543 * Looking for test storage...
00:30:57.543 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-:
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-:
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<'
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:30:57.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:57.543 --rc genhtml_branch_coverage=1
00:30:57.543 --rc genhtml_function_coverage=1
00:30:57.543 --rc genhtml_legend=1
00:30:57.543 --rc geninfo_all_blocks=1
00:30:57.543 --rc geninfo_unexecuted_blocks=1
00:30:57.543
00:30:57.543 '
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:30:57.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:57.543 --rc genhtml_branch_coverage=1
00:30:57.543 --rc genhtml_function_coverage=1
00:30:57.543 --rc genhtml_legend=1
00:30:57.543 --rc geninfo_all_blocks=1
00:30:57.543 --rc geninfo_unexecuted_blocks=1
00:30:57.543
00:30:57.543 '
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:30:57.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:57.543 --rc genhtml_branch_coverage=1
00:30:57.543 --rc genhtml_function_coverage=1
00:30:57.543 --rc genhtml_legend=1
00:30:57.543 --rc geninfo_all_blocks=1
00:30:57.543 --rc geninfo_unexecuted_blocks=1
00:30:57.543
00:30:57.543 '
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:30:57.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:57.543 --rc genhtml_branch_coverage=1
00:30:57.543 --rc genhtml_function_coverage=1
00:30:57.543 --rc genhtml_legend=1
00:30:57.543 --rc geninfo_all_blocks=1
00:30:57.543 --rc geninfo_unexecuted_blocks=1
00:30:57.543
00:30:57.543 '
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
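The lcov version gate that just ran (lt 1.15 2 via cmp_versions) is scripts/common.sh logic; below is a minimal sketch of that comparison, simplified from the xtrace above (the real helper also validates every component with decimal() and handles the >, <=, >= and == operators, all omitted here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local ver1 ver2 v
        IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"   # "2"    -> (2)
        # walk the longer of the two component lists
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            # a missing component compares as 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not strictly '<'
    }

Here the comparison succeeds (return 0 at scripts/common.sh@368), so the pre-2.0 '--rc lcov_branch_coverage' option spelling is selected for the LCOV_OPTS/LCOV exports above.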
00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:57.543 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.12j5xmBrpL 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:57.544 17:17:31 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=83608 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 83608 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 83608 ']' 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:57.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:57.544 17:17:31 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:30:57.544 [2024-12-05 17:17:31.873980] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:30:57.544 [2024-12-05 17:17:31.874732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83608 ] 00:30:57.805 [2024-12-05 17:17:32.039031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.805 [2024-12-05 17:17:32.122613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:30:58.377 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:30:58.637 17:17:32 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:58.897 { 00:30:58.897 "name": "nvme0n1", 00:30:58.897 "aliases": [ 00:30:58.897 "47e8d24a-40f2-466a-a3a7-ce2b6c2e037d" 00:30:58.897 ], 00:30:58.897 "product_name": "NVMe disk", 00:30:58.897 "block_size": 4096, 00:30:58.897 "num_blocks": 1310720, 00:30:58.897 "uuid": "47e8d24a-40f2-466a-a3a7-ce2b6c2e037d", 00:30:58.897 "numa_id": -1, 00:30:58.897 "assigned_rate_limits": { 00:30:58.897 "rw_ios_per_sec": 0, 00:30:58.897 "rw_mbytes_per_sec": 0, 00:30:58.897 "r_mbytes_per_sec": 0, 00:30:58.897 "w_mbytes_per_sec": 0 00:30:58.897 }, 00:30:58.897 "claimed": true, 00:30:58.897 "claim_type": "read_many_write_one", 00:30:58.897 "zoned": false, 00:30:58.897 "supported_io_types": { 00:30:58.897 "read": true, 00:30:58.897 "write": true, 00:30:58.897 "unmap": true, 00:30:58.897 "flush": true, 00:30:58.897 "reset": true, 00:30:58.897 "nvme_admin": true, 00:30:58.897 "nvme_io": true, 00:30:58.897 "nvme_io_md": false, 00:30:58.897 "write_zeroes": true, 00:30:58.897 "zcopy": false, 00:30:58.897 "get_zone_info": false, 00:30:58.897 "zone_management": false, 00:30:58.897 "zone_append": false, 00:30:58.897 "compare": true, 00:30:58.897 "compare_and_write": false, 00:30:58.897 "abort": true, 00:30:58.897 "seek_hole": false, 00:30:58.897 "seek_data": false, 00:30:58.897 "copy": true, 00:30:58.897 "nvme_iov_md": false 00:30:58.897 }, 00:30:58.897 "driver_specific": { 00:30:58.897 "nvme": [ 00:30:58.897 { 00:30:58.897 "pci_address": "0000:00:11.0", 00:30:58.897 "trid": { 00:30:58.897 "trtype": "PCIe", 00:30:58.897 "traddr": "0000:00:11.0" 00:30:58.897 }, 00:30:58.897 "ctrlr_data": { 00:30:58.897 "cntlid": 0, 00:30:58.897 "vendor_id": "0x1b36", 00:30:58.897 "model_number": "QEMU NVMe Ctrl", 00:30:58.897 "serial_number": "12341", 00:30:58.897 "firmware_revision": "8.0.0", 00:30:58.897 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:58.897 "oacs": { 00:30:58.897 "security": 0, 00:30:58.897 "format": 1, 00:30:58.897 "firmware": 0, 00:30:58.897 "ns_manage": 1 00:30:58.897 }, 00:30:58.897 "multi_ctrlr": false, 00:30:58.897 "ana_reporting": false 00:30:58.897 }, 00:30:58.897 "vs": { 00:30:58.897 "nvme_version": "1.4" 00:30:58.897 }, 00:30:58.897 "ns_data": { 00:30:58.897 "id": 1, 00:30:58.897 "can_share": false 00:30:58.897 } 00:30:58.897 } 00:30:58.897 ], 00:30:58.897 "mp_policy": "active_passive" 00:30:58.897 } 00:30:58.897 } 00:30:58.897 ]' 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:58.897 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r 
'.[] | .uuid' 00:30:58.898 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:59.156 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=d94f0693-dc4f-4e92-b5ee-feabb40b6ef8 00:30:59.156 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:30:59.156 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d94f0693-dc4f-4e92-b5ee-feabb40b6ef8 00:30:59.417 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:30:59.676 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=76f6fa8d-f5c1-4aa3-8d44-ce8380410518 00:30:59.676 17:17:33 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 76f6fa8d-f5c1-4aa3-8d44-ce8380410518 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:30:59.937 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.198 { 00:31:00.198 "name": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.198 "aliases": [ 00:31:00.198 "lvs/nvme0n1p0" 00:31:00.198 ], 00:31:00.198 "product_name": "Logical Volume", 00:31:00.198 "block_size": 4096, 00:31:00.198 "num_blocks": 26476544, 00:31:00.198 "uuid": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.198 "assigned_rate_limits": { 00:31:00.198 "rw_ios_per_sec": 0, 00:31:00.198 "rw_mbytes_per_sec": 0, 00:31:00.198 "r_mbytes_per_sec": 0, 00:31:00.198 "w_mbytes_per_sec": 0 00:31:00.198 }, 00:31:00.198 "claimed": false, 00:31:00.198 "zoned": false, 00:31:00.198 "supported_io_types": { 00:31:00.198 "read": true, 00:31:00.198 "write": true, 00:31:00.198 "unmap": true, 00:31:00.198 "flush": false, 00:31:00.198 "reset": true, 00:31:00.198 "nvme_admin": false, 00:31:00.198 "nvme_io": false, 00:31:00.198 "nvme_io_md": false, 00:31:00.198 "write_zeroes": true, 00:31:00.198 "zcopy": false, 00:31:00.198 "get_zone_info": false, 00:31:00.198 "zone_management": false, 00:31:00.198 "zone_append": 
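# Standalone sketch of the lvol setup just traced (UUIDs are the values captured in
# this run and will differ elsewhere; the xargs form of clear_lvols is an equivalent
# one-liner, not the script's own for-loop):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_lvol_get_lvstores | jq -r '.[] | .uuid' \
  | xargs -r -n1 "$rpc" bdev_lvol_delete_lvstore -u     # drop any stale lvstores first
lvs=$("$rpc" bdev_lvol_create_lvstore nvme0n1 lvs)      # -> 76f6fa8d-... in this run
"$rpc" bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs"   # 103424 MiB thin lvol -> 9e7e9bec-...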
false, 00:31:00.198 "compare": false, 00:31:00.198 "compare_and_write": false, 00:31:00.198 "abort": false, 00:31:00.198 "seek_hole": true, 00:31:00.198 "seek_data": true, 00:31:00.198 "copy": false, 00:31:00.198 "nvme_iov_md": false 00:31:00.198 }, 00:31:00.198 "driver_specific": { 00:31:00.198 "lvol": { 00:31:00.198 "lvol_store_uuid": "76f6fa8d-f5c1-4aa3-8d44-ce8380410518", 00:31:00.198 "base_bdev": "nvme0n1", 00:31:00.198 "thin_provision": true, 00:31:00.198 "num_allocated_clusters": 0, 00:31:00.198 "snapshot": false, 00:31:00.198 "clone": false, 00:31:00.198 "esnap_clone": false 00:31:00.198 } 00:31:00.198 } 00:31:00.198 } 00:31:00.198 ]' 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:00.198 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:00.459 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.720 { 00:31:00.720 "name": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.720 "aliases": [ 00:31:00.720 "lvs/nvme0n1p0" 00:31:00.720 ], 00:31:00.720 "product_name": "Logical Volume", 00:31:00.720 "block_size": 4096, 00:31:00.720 "num_blocks": 26476544, 00:31:00.720 "uuid": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.720 "assigned_rate_limits": { 00:31:00.720 "rw_ios_per_sec": 0, 00:31:00.720 "rw_mbytes_per_sec": 0, 00:31:00.720 "r_mbytes_per_sec": 0, 00:31:00.720 "w_mbytes_per_sec": 0 00:31:00.720 }, 00:31:00.720 "claimed": false, 00:31:00.720 "zoned": false, 00:31:00.720 "supported_io_types": { 00:31:00.720 "read": true, 00:31:00.720 "write": true, 00:31:00.720 "unmap": true, 00:31:00.720 "flush": false, 00:31:00.720 "reset": true, 00:31:00.720 "nvme_admin": false, 00:31:00.720 "nvme_io": false, 00:31:00.720 "nvme_io_md": false, 00:31:00.720 "write_zeroes": true, 00:31:00.720 "zcopy": false, 00:31:00.720 "get_zone_info": false, 00:31:00.720 "zone_management": false, 
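# Standalone sketch of the NV-cache attach step traced here: the base namespace's
# controller sits at 0000:00:11.0 per the JSON above, while the cache controller is
# the second QEMU device at 0000:00:10.0; its namespace comes up as bdev nvc0n1:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0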
00:31:00.720 "zone_append": false, 00:31:00.720 "compare": false, 00:31:00.720 "compare_and_write": false, 00:31:00.720 "abort": false, 00:31:00.720 "seek_hole": true, 00:31:00.720 "seek_data": true, 00:31:00.720 "copy": false, 00:31:00.720 "nvme_iov_md": false 00:31:00.720 }, 00:31:00.720 "driver_specific": { 00:31:00.720 "lvol": { 00:31:00.720 "lvol_store_uuid": "76f6fa8d-f5c1-4aa3-8d44-ce8380410518", 00:31:00.720 "base_bdev": "nvme0n1", 00:31:00.720 "thin_provision": true, 00:31:00.720 "num_allocated_clusters": 0, 00:31:00.720 "snapshot": false, 00:31:00.720 "clone": false, 00:31:00.720 "esnap_clone": false 00:31:00.720 } 00:31:00.720 } 00:31:00.720 } 00:31:00.720 ]' 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:00.720 17:17:34 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:00.720 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 00:31:00.979 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:00.979 { 00:31:00.979 "name": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.979 "aliases": [ 00:31:00.979 "lvs/nvme0n1p0" 00:31:00.979 ], 00:31:00.979 "product_name": "Logical Volume", 00:31:00.979 "block_size": 4096, 00:31:00.979 "num_blocks": 26476544, 00:31:00.979 "uuid": "9e7e9bec-5efd-4cc7-810a-cd278768ecb8", 00:31:00.979 "assigned_rate_limits": { 00:31:00.979 "rw_ios_per_sec": 0, 00:31:00.979 "rw_mbytes_per_sec": 0, 00:31:00.979 "r_mbytes_per_sec": 0, 00:31:00.979 "w_mbytes_per_sec": 0 00:31:00.979 }, 00:31:00.979 "claimed": false, 00:31:00.979 "zoned": false, 00:31:00.979 "supported_io_types": { 00:31:00.979 "read": true, 00:31:00.979 "write": true, 00:31:00.979 "unmap": true, 00:31:00.979 "flush": false, 00:31:00.979 "reset": true, 00:31:00.979 "nvme_admin": false, 00:31:00.979 "nvme_io": false, 00:31:00.979 "nvme_io_md": false, 00:31:00.979 "write_zeroes": true, 00:31:00.979 "zcopy": false, 00:31:00.979 "get_zone_info": false, 00:31:00.979 "zone_management": false, 00:31:00.979 "zone_append": false, 00:31:00.979 "compare": false, 00:31:00.979 "compare_and_write": false, 00:31:00.979 "abort": false, 00:31:00.979 "seek_hole": 
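# Standalone sketch of the cache split traced here; the 5171 MiB cache_size equals 5%
# of the 103424 MiB base lvol (103424 * 5 / 100 = 5171, integer division):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" bdev_split_create nvc0n1 -s 5171 1   # one 5171 MiB split -> bdev nvc0n1p0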
true, 00:31:00.979 "seek_data": true, 00:31:00.979 "copy": false, 00:31:00.979 "nvme_iov_md": false 00:31:00.979 }, 00:31:00.979 "driver_specific": { 00:31:00.979 "lvol": { 00:31:00.979 "lvol_store_uuid": "76f6fa8d-f5c1-4aa3-8d44-ce8380410518", 00:31:00.980 "base_bdev": "nvme0n1", 00:31:00.980 "thin_provision": true, 00:31:00.980 "num_allocated_clusters": 0, 00:31:00.980 "snapshot": false, 00:31:00.980 "clone": false, 00:31:00.980 "esnap_clone": false 00:31:00.980 } 00:31:00.980 } 00:31:00.980 } 00:31:00.980 ]' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 --l2p_dram_limit 10' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:00.980 17:17:35 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:01.240 [2024-12-05 17:17:35.526246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.526288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:01.240 [2024-12-05 17:17:35.526300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:01.240 [2024-12-05 17:17:35.526307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.526352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.526360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:01.240 [2024-12-05 17:17:35.526368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:31:01.240 [2024-12-05 17:17:35.526374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.526393] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:01.240 [2024-12-05 17:17:35.526926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:01.240 [2024-12-05 17:17:35.526947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.526964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:01.240 [2024-12-05 17:17:35.526974] mngt/ftl_mngt.c: 
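# The assembled ftl_construct_args, as one standalone command; the -t 240 RPC timeout
# allows for the multi-second NV-cache scrub traced below on first-time startup:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -t 240 bdev_ftl_create -b ftl0 -d 9e7e9bec-5efd-4cc7-810a-cd278768ecb8 \
    --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown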
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:31:01.240 [2024-12-05 17:17:35.526981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.527032] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f853cac7-bcba-40a7-941e-823694b449b9 00:31:01.240 [2024-12-05 17:17:35.527976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.528000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:01.240 [2024-12-05 17:17:35.528008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:31:01.240 [2024-12-05 17:17:35.528018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.532801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.532834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:01.240 [2024-12-05 17:17:35.532842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.752 ms 00:31:01.240 [2024-12-05 17:17:35.532850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.532914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.532923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:01.240 [2024-12-05 17:17:35.532929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:31:01.240 [2024-12-05 17:17:35.532939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.532984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.532994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:01.240 [2024-12-05 17:17:35.533001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:31:01.240 [2024-12-05 17:17:35.533008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.533024] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:01.240 [2024-12-05 17:17:35.535850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.535878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:01.240 [2024-12-05 17:17:35.535887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.828 ms 00:31:01.240 [2024-12-05 17:17:35.535892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.535919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.535926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:01.240 [2024-12-05 17:17:35.535933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:01.240 [2024-12-05 17:17:35.535939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.535962] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:01.240 [2024-12-05 17:17:35.536070] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:01.240 [2024-12-05 17:17:35.536083] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:01.240 [2024-12-05 17:17:35.536092] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:01.240 [2024-12-05 17:17:35.536101] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:01.240 [2024-12-05 17:17:35.536107] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:01.240 [2024-12-05 17:17:35.536115] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:01.240 [2024-12-05 17:17:35.536121] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:01.240 [2024-12-05 17:17:35.536130] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:01.240 [2024-12-05 17:17:35.536136] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:01.240 [2024-12-05 17:17:35.536144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.536154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:01.240 [2024-12-05 17:17:35.536161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:31:01.240 [2024-12-05 17:17:35.536166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.536232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.240 [2024-12-05 17:17:35.536239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:01.240 [2024-12-05 17:17:35.536246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:01.240 [2024-12-05 17:17:35.536252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.240 [2024-12-05 17:17:35.536329] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:01.240 [2024-12-05 17:17:35.536340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:01.240 [2024-12-05 17:17:35.536348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.240 [2024-12-05 17:17:35.536354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.240 [2024-12-05 17:17:35.536361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:01.240 [2024-12-05 17:17:35.536366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:01.240 [2024-12-05 17:17:35.536372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:01.240 [2024-12-05 17:17:35.536378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:01.240 [2024-12-05 17:17:35.536386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:01.240 [2024-12-05 17:17:35.536391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.241 [2024-12-05 17:17:35.536397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:01.241 [2024-12-05 17:17:35.536402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:01.241 [2024-12-05 17:17:35.536408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:01.241 [2024-12-05 17:17:35.536413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:01.241 [2024-12-05 17:17:35.536420] ftl_layout.c: 131:dump_region: *NOTICE*: 
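# Arithmetic check on the layout dump around this point: 20971520 L2P entries * 4 B
# per entry = 83886080 B = 80.00 MiB, exactly the "Region l2p" size listed here. That
# table is much larger than the 10 MiB --l2p_dram_limit, so only a slice of it can
# stay resident in DRAM (see "l2p maximum resident size is: 9 (of 10) MiB" further down).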
[FTL][ftl0] offset: 113.88 MiB 00:31:01.241 [2024-12-05 17:17:35.536426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:01.241 [2024-12-05 17:17:35.536440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:01.241 [2024-12-05 17:17:35.536458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:01.241 [2024-12-05 17:17:35.536474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:01.241 [2024-12-05 17:17:35.536491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:01.241 [2024-12-05 17:17:35.536507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:01.241 [2024-12-05 17:17:35.536525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.241 [2024-12-05 17:17:35.536537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:01.241 [2024-12-05 17:17:35.536542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:01.241 [2024-12-05 17:17:35.536548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:01.241 [2024-12-05 17:17:35.536553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:01.241 [2024-12-05 17:17:35.536559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:01.241 [2024-12-05 17:17:35.536564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:01.241 [2024-12-05 17:17:35.536575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:01.241 [2024-12-05 17:17:35.536581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536585] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:01.241 [2024-12-05 17:17:35.536592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:01.241 [2024-12-05 17:17:35.536597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:01.241 [2024-12-05 
17:17:35.536604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:01.241 [2024-12-05 17:17:35.536613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:01.241 [2024-12-05 17:17:35.536621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:01.241 [2024-12-05 17:17:35.536626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:01.241 [2024-12-05 17:17:35.536632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:01.241 [2024-12-05 17:17:35.536637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:01.241 [2024-12-05 17:17:35.536643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:01.241 [2024-12-05 17:17:35.536650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:01.241 [2024-12-05 17:17:35.536659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:01.241 [2024-12-05 17:17:35.536674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:01.241 [2024-12-05 17:17:35.536679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:01.241 [2024-12-05 17:17:35.536702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:01.241 [2024-12-05 17:17:35.536708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:01.241 [2024-12-05 17:17:35.536715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:01.241 [2024-12-05 17:17:35.536720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:01.241 [2024-12-05 17:17:35.536727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:01.241 [2024-12-05 17:17:35.536732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:01.241 [2024-12-05 17:17:35.536741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536746] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536758] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536765] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:01.241 [2024-12-05 
17:17:35.536771] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:01.241 [2024-12-05 17:17:35.536778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536784] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:01.241 [2024-12-05 17:17:35.536790] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:01.241 [2024-12-05 17:17:35.536796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:01.241 [2024-12-05 17:17:35.536802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:01.241 [2024-12-05 17:17:35.536808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:01.241 [2024-12-05 17:17:35.536815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:01.241 [2024-12-05 17:17:35.536821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:31:01.241 [2024-12-05 17:17:35.536828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:01.241 [2024-12-05 17:17:35.536858] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:01.241 [2024-12-05 17:17:35.536868] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:05.448 [2024-12-05 17:17:39.236925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.236992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:05.448 [2024-12-05 17:17:39.237007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3700.053 ms 00:31:05.448 [2024-12-05 17:17:39.237018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.259603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.259645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:05.448 [2024-12-05 17:17:39.259655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.383 ms 00:31:05.448 [2024-12-05 17:17:39.259663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.259756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.259765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:05.448 [2024-12-05 17:17:39.259773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:05.448 [2024-12-05 17:17:39.259784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.283748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.283781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:05.448 [2024-12-05 17:17:39.283789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.928 ms 00:31:05.448 [2024-12-05 17:17:39.283798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.283820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.283830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:05.448 [2024-12-05 17:17:39.283836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:05.448 [2024-12-05 17:17:39.283848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.284171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.284194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:05.448 [2024-12-05 17:17:39.284202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:31:05.448 [2024-12-05 17:17:39.284210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.448 [2024-12-05 17:17:39.284289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.448 [2024-12-05 17:17:39.284306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:05.448 [2024-12-05 17:17:39.284314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:05.448 [2024-12-05 17:17:39.284323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.295666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.295699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:05.449 [2024-12-05 17:17:39.295706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.330 ms 00:31:05.449 [2024-12-05 17:17:39.295714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.318660] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:05.449 [2024-12-05 17:17:39.321754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.321793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:05.449 [2024-12-05 17:17:39.321810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.983 ms 00:31:05.449 [2024-12-05 17:17:39.321819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.391416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.391459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:05.449 [2024-12-05 17:17:39.391479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.553 ms 00:31:05.449 [2024-12-05 17:17:39.391492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.391725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.391749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:05.449 [2024-12-05 17:17:39.391768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:31:05.449 [2024-12-05 17:17:39.391780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.415307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.415342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 
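# The limit taking effect: with --l2p_dram_limit 10, the L2P cache above caps its
# resident set at 9 (of 10) MiB; the rest of the 80 MiB table is demand-paged from
# the l2p region on the NV-cache device.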
00:31:05.449 [2024-12-05 17:17:39.415360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.482 ms 00:31:05.449 [2024-12-05 17:17:39.415372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.438357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.438390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:05.449 [2024-12-05 17:17:39.438407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.935 ms 00:31:05.449 [2024-12-05 17:17:39.438418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.439060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.439085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:05.449 [2024-12-05 17:17:39.439101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:31:05.449 [2024-12-05 17:17:39.439115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.500481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.500512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:05.449 [2024-12-05 17:17:39.500530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.322 ms 00:31:05.449 [2024-12-05 17:17:39.500539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.518879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.518909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:05.449 [2024-12-05 17:17:39.518923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.268 ms 00:31:05.449 [2024-12-05 17:17:39.518932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.536597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.536625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:05.449 [2024-12-05 17:17:39.536638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.620 ms 00:31:05.449 [2024-12-05 17:17:39.536646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.554410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.554438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:05.449 [2024-12-05 17:17:39.554448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.726 ms 00:31:05.449 [2024-12-05 17:17:39.554455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.554487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.554495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:05.449 [2024-12-05 17:17:39.554505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:05.449 [2024-12-05 17:17:39.554510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.554568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.449 [2024-12-05 17:17:39.554577] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:05.449 [2024-12-05 17:17:39.554585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:31:05.449 [2024-12-05 17:17:39.554590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.449 [2024-12-05 17:17:39.555464] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4028.888 ms, result 0 00:31:05.449 { 00:31:05.449 "name": "ftl0", 00:31:05.449 "uuid": "f853cac7-bcba-40a7-941e-823694b449b9" 00:31:05.449 } 00:31:05.449 17:17:39 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:31:05.449 17:17:39 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:31:05.449 17:17:39 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:31:05.449 17:17:39 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:31:05.709 [2024-12-05 17:17:39.922897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.922940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:31:05.709 [2024-12-05 17:17:39.922959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:05.709 [2024-12-05 17:17:39.922967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.922985] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:31:05.709 [2024-12-05 17:17:39.925127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.925150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:31:05.709 [2024-12-05 17:17:39.925160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:31:05.709 [2024-12-05 17:17:39.925167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.925364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.925379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:31:05.709 [2024-12-05 17:17:39.925387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:31:05.709 [2024-12-05 17:17:39.925393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.927837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.927856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:31:05.709 [2024-12-05 17:17:39.927864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.432 ms 00:31:05.709 [2024-12-05 17:17:39.927871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.932547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.932576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:31:05.709 [2024-12-05 17:17:39.932587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:31:05.709 [2024-12-05 17:17:39.932593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.951034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 
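# Standalone sketch of the teardown entered here: restore.sh brackets the bdev config
# with the subsystems wrapper before unloading (ftl.json is a hypothetical filename):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
{ echo '{"subsystems": ['; "$rpc" save_subsystem_config -n bdev; echo ']}'; } > ftl.json
"$rpc" bdev_ftl_unload -b ftl0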
[2024-12-05 17:17:39.951061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:05.709 [2024-12-05 17:17:39.951071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.390 ms 00:31:05.709 [2024-12-05 17:17:39.951077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.963200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.963230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:05.709 [2024-12-05 17:17:39.963240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.092 ms 00:31:05.709 [2024-12-05 17:17:39.963247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.963361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.709 [2024-12-05 17:17:39.963370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:05.709 [2024-12-05 17:17:39.963378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:05.709 [2024-12-05 17:17:39.963384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.709 [2024-12-05 17:17:39.981239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.710 [2024-12-05 17:17:39.981265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:05.710 [2024-12-05 17:17:39.981275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.838 ms 00:31:05.710 [2024-12-05 17:17:39.981281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.710 [2024-12-05 17:17:39.998606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.710 [2024-12-05 17:17:39.998631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:05.710 [2024-12-05 17:17:39.998640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.298 ms 00:31:05.710 [2024-12-05 17:17:39.998645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.710 [2024-12-05 17:17:40.016511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.710 [2024-12-05 17:17:40.016541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:05.710 [2024-12-05 17:17:40.016550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.835 ms 00:31:05.710 [2024-12-05 17:17:40.016556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.710 [2024-12-05 17:17:40.034266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.710 [2024-12-05 17:17:40.034303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:05.710 [2024-12-05 17:17:40.034313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.653 ms 00:31:05.710 [2024-12-05 17:17:40.034319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.710 [2024-12-05 17:17:40.034347] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:05.710 [2024-12-05 17:17:40.034358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034547] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 
17:17:40.034717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:05.710 [2024-12-05 17:17:40.034882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:31:05.711 [2024-12-05 17:17:40.034896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.034993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:05.711 [2024-12-05 17:17:40.035071] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:05.711 [2024-12-05 17:17:40.035079] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f853cac7-bcba-40a7-941e-823694b449b9 00:31:05.711 
[2024-12-05 17:17:40.035085] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:05.711 [2024-12-05 17:17:40.035094] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:05.711 [2024-12-05 17:17:40.035102] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:05.711 [2024-12-05 17:17:40.035109] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:05.711 [2024-12-05 17:17:40.035115] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:05.711 [2024-12-05 17:17:40.035122] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:05.711 [2024-12-05 17:17:40.035127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:05.711 [2024-12-05 17:17:40.035134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:05.711 [2024-12-05 17:17:40.035139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:05.711 [2024-12-05 17:17:40.035146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.711 [2024-12-05 17:17:40.035152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:05.711 [2024-12-05 17:17:40.035160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.800 ms 00:31:05.711 [2024-12-05 17:17:40.035167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.711 [2024-12-05 17:17:40.044987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.711 [2024-12-05 17:17:40.045014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:05.711 [2024-12-05 17:17:40.045024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.794 ms 00:31:05.711 [2024-12-05 17:17:40.045030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.711 [2024-12-05 17:17:40.045314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:05.711 [2024-12-05 17:17:40.045331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:05.711 [2024-12-05 17:17:40.045341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:31:05.711 [2024-12-05 17:17:40.045347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.969 [2024-12-05 17:17:40.078490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.969 [2024-12-05 17:17:40.078524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:05.969 [2024-12-05 17:17:40.078535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.969 [2024-12-05 17:17:40.078541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.969 [2024-12-05 17:17:40.078588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.969 [2024-12-05 17:17:40.078595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:05.969 [2024-12-05 17:17:40.078604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.969 [2024-12-05 17:17:40.078610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.969 [2024-12-05 17:17:40.078678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.969 [2024-12-05 17:17:40.078685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:05.969 [2024-12-05 17:17:40.078693] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.969 [2024-12-05 17:17:40.078699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.078715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.078721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:05.970 [2024-12-05 17:17:40.078728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.078735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.137155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.137200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:05.970 [2024-12-05 17:17:40.137210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.137216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:05.970 [2024-12-05 17:17:40.185274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:05.970 [2024-12-05 17:17:40.185350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:05.970 [2024-12-05 17:17:40.185418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:05.970 [2024-12-05 17:17:40.185513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:05.970 [2024-12-05 17:17:40.185556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:31:05.970 [2024-12-05 17:17:40.185608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:05.970 [2024-12-05 17:17:40.185657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:05.970 [2024-12-05 17:17:40.185664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:05.970 [2024-12-05 17:17:40.185670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:05.970 [2024-12-05 17:17:40.185770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.849 ms, result 0 00:31:05.970 true 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 83608 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83608 ']' 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83608 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83608 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:05.970 killing process with pid 83608 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83608' 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 83608 00:31:05.970 17:17:40 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 83608 00:31:12.526 17:17:45 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:31:15.810 262144+0 records in 00:31:15.810 262144+0 records out 00:31:15.810 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.69258 s, 291 MB/s 00:31:15.810 17:17:49 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:31:17.186 17:17:51 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:31:17.186 [2024-12-05 17:17:51.304982] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
00:31:17.186 [2024-12-05 17:17:51.305073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83829 ] 00:31:17.186 [2024-12-05 17:17:51.451815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.186 [2024-12-05 17:17:51.529066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.444 [2024-12-05 17:17:51.737238] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:17.444 [2024-12-05 17:17:51.737292] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:17.706 [2024-12-05 17:17:51.888253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.888291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:17.707 [2024-12-05 17:17:51.888302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:17.707 [2024-12-05 17:17:51.888308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.888342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.888351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:17.707 [2024-12-05 17:17:51.888357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:17.707 [2024-12-05 17:17:51.888363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.888376] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:17.707 [2024-12-05 17:17:51.888913] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:17.707 [2024-12-05 17:17:51.888930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.888936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:17.707 [2024-12-05 17:17:51.888942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:31:17.707 [2024-12-05 17:17:51.888957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.889879] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:17.707 [2024-12-05 17:17:51.899404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.899431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:17.707 [2024-12-05 17:17:51.899440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.526 ms 00:31:17.707 [2024-12-05 17:17:51.899446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.899489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.899497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:17.707 [2024-12-05 17:17:51.899504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:31:17.707 [2024-12-05 17:17:51.899509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.903740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:17.707 [2024-12-05 17:17:51.903763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:17.707 [2024-12-05 17:17:51.903770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:31:17.707 [2024-12-05 17:17:51.903779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.903831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.903837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:17.707 [2024-12-05 17:17:51.903844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:31:17.707 [2024-12-05 17:17:51.903849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.903879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.903886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:17.707 [2024-12-05 17:17:51.903892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:17.707 [2024-12-05 17:17:51.903901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.903916] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:17.707 [2024-12-05 17:17:51.906443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.906466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:17.707 [2024-12-05 17:17:51.906475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.530 ms 00:31:17.707 [2024-12-05 17:17:51.906481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.906507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.906514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:17.707 [2024-12-05 17:17:51.906520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:17.707 [2024-12-05 17:17:51.906525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.906539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:17.707 [2024-12-05 17:17:51.906554] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:17.707 [2024-12-05 17:17:51.906579] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:17.707 [2024-12-05 17:17:51.906592] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:17.707 [2024-12-05 17:17:51.906670] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:17.707 [2024-12-05 17:17:51.906678] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:17.707 [2024-12-05 17:17:51.906685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:17.707 [2024-12-05 17:17:51.906693] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:17.707 [2024-12-05 17:17:51.906699] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:17.707 [2024-12-05 17:17:51.906706] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:17.707 [2024-12-05 17:17:51.906711] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:17.707 [2024-12-05 17:17:51.906718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:17.707 [2024-12-05 17:17:51.906724] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:17.707 [2024-12-05 17:17:51.906730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.906736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:17.707 [2024-12-05 17:17:51.906742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:31:17.707 [2024-12-05 17:17:51.906747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.906810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.707 [2024-12-05 17:17:51.906816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:17.707 [2024-12-05 17:17:51.906822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:17.707 [2024-12-05 17:17:51.906827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.707 [2024-12-05 17:17:51.906902] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:17.707 [2024-12-05 17:17:51.906916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:17.707 [2024-12-05 17:17:51.906922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:17.707 [2024-12-05 17:17:51.906928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.707 [2024-12-05 17:17:51.906934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:17.707 [2024-12-05 17:17:51.906939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:17.707 [2024-12-05 17:17:51.906944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:17.707 [2024-12-05 17:17:51.906957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:17.707 [2024-12-05 17:17:51.906963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:17.707 [2024-12-05 17:17:51.906968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:17.707 [2024-12-05 17:17:51.906973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:17.707 [2024-12-05 17:17:51.906979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:17.707 [2024-12-05 17:17:51.906984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:17.707 [2024-12-05 17:17:51.906993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:17.707 [2024-12-05 17:17:51.906998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:17.707 [2024-12-05 17:17:51.907005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:17.707 [2024-12-05 17:17:51.907015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:17.707 [2024-12-05 17:17:51.907020] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:17.707 [2024-12-05 17:17:51.907030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:17.707 [2024-12-05 17:17:51.907040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:17.707 [2024-12-05 17:17:51.907045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:17.707 [2024-12-05 17:17:51.907055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:17.707 [2024-12-05 17:17:51.907060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:17.707 [2024-12-05 17:17:51.907070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:17.707 [2024-12-05 17:17:51.907075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:17.707 [2024-12-05 17:17:51.907085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:17.707 [2024-12-05 17:17:51.907090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:17.707 [2024-12-05 17:17:51.907094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:17.707 [2024-12-05 17:17:51.907100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:17.707 [2024-12-05 17:17:51.907105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:17.707 [2024-12-05 17:17:51.907109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:17.707 [2024-12-05 17:17:51.907114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:17.707 [2024-12-05 17:17:51.907120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:17.707 [2024-12-05 17:17:51.907124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.708 [2024-12-05 17:17:51.907129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:17.708 [2024-12-05 17:17:51.907134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:17.708 [2024-12-05 17:17:51.907139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.708 [2024-12-05 17:17:51.907144] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:17.708 [2024-12-05 17:17:51.907150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:17.708 [2024-12-05 17:17:51.907155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:17.708 [2024-12-05 17:17:51.907160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:17.708 [2024-12-05 17:17:51.907166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:17.708 [2024-12-05 17:17:51.907172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:17.708 [2024-12-05 17:17:51.907177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:17.708 
[2024-12-05 17:17:51.907182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:17.708 [2024-12-05 17:17:51.907187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:17.708 [2024-12-05 17:17:51.907192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:17.708 [2024-12-05 17:17:51.907197] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:17.708 [2024-12-05 17:17:51.907204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:17.708 [2024-12-05 17:17:51.907217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:17.708 [2024-12-05 17:17:51.907223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:17.708 [2024-12-05 17:17:51.907228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:17.708 [2024-12-05 17:17:51.907234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:17.708 [2024-12-05 17:17:51.907239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:17.708 [2024-12-05 17:17:51.907244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:17.708 [2024-12-05 17:17:51.907249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:17.708 [2024-12-05 17:17:51.907255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:17.708 [2024-12-05 17:17:51.907260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907265] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:17.708 [2024-12-05 17:17:51.907286] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:17.708 [2024-12-05 17:17:51.907292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907298] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:17.708 [2024-12-05 17:17:51.907304] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:17.708 [2024-12-05 17:17:51.907309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:17.708 [2024-12-05 17:17:51.907314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:17.708 [2024-12-05 17:17:51.907320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.907325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:17.708 [2024-12-05 17:17:51.907330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:31:17.708 [2024-12-05 17:17:51.907335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.927821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.927847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:17.708 [2024-12-05 17:17:51.927855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.453 ms 00:31:17.708 [2024-12-05 17:17:51.927863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.927924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.927930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:17.708 [2024-12-05 17:17:51.927937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:31:17.708 [2024-12-05 17:17:51.927942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.973281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.973313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:17.708 [2024-12-05 17:17:51.973321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.285 ms 00:31:17.708 [2024-12-05 17:17:51.973328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.973357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.973364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:17.708 [2024-12-05 17:17:51.973374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:31:17.708 [2024-12-05 17:17:51.973380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.973680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.973700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:17.708 [2024-12-05 17:17:51.973707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:31:17.708 [2024-12-05 17:17:51.973713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.973810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.973819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:17.708 [2024-12-05 17:17:51.973825] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:31:17.708 [2024-12-05 17:17:51.973834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.984440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.984465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:17.708 [2024-12-05 17:17:51.984474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.591 ms 00:31:17.708 [2024-12-05 17:17:51.984480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:51.994209] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:17.708 [2024-12-05 17:17:51.994240] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:17.708 [2024-12-05 17:17:51.994249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:51.994255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:17.708 [2024-12-05 17:17:51.994263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.692 ms 00:31:17.708 [2024-12-05 17:17:51.994269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:52.012780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:52.012812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:17.708 [2024-12-05 17:17:52.012821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.481 ms 00:31:17.708 [2024-12-05 17:17:52.012828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:52.021812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:52.021839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:17.708 [2024-12-05 17:17:52.021846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.955 ms 00:31:17.708 [2024-12-05 17:17:52.021852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:52.030492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:52.030518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:17.708 [2024-12-05 17:17:52.030525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.615 ms 00:31:17.708 [2024-12-05 17:17:52.030531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.708 [2024-12-05 17:17:52.031020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.708 [2024-12-05 17:17:52.031041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:17.708 [2024-12-05 17:17:52.031049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:31:17.708 [2024-12-05 17:17:52.031057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.077223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.077270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:17.970 [2024-12-05 17:17:52.077282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
46.151 ms 00:31:17.970 [2024-12-05 17:17:52.077293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.085305] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:17.970 [2024-12-05 17:17:52.087295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.087321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:17.970 [2024-12-05 17:17:52.087331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.953 ms 00:31:17.970 [2024-12-05 17:17:52.087338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.087406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.087416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:17.970 [2024-12-05 17:17:52.087424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:17.970 [2024-12-05 17:17:52.087431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.087499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.087515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:17.970 [2024-12-05 17:17:52.087523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:31:17.970 [2024-12-05 17:17:52.087529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.087544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.087550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:17.970 [2024-12-05 17:17:52.087557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:17.970 [2024-12-05 17:17:52.087562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.087587] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:17.970 [2024-12-05 17:17:52.087596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.087601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:17.970 [2024-12-05 17:17:52.087607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:17.970 [2024-12-05 17:17:52.087613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.105517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.105547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:17.970 [2024-12-05 17:17:52.105556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.891 ms 00:31:17.970 [2024-12-05 17:17:52.105565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 [2024-12-05 17:17:52.105618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:17.970 [2024-12-05 17:17:52.105625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:17.970 [2024-12-05 17:17:52.105632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:17.970 [2024-12-05 17:17:52.105638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:17.970 
[2024-12-05 17:17:52.106366] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.783 ms, result 0 00:31:18.930  [2024-12-05T17:17:54.313Z] Copying: 10/1024 [MB] (10 MBps) [progress meter, 2024-12-05T17:17:54Z to 17:18:47Z: interval rates 10-45 MBps] [2024-12-05T17:18:47.483Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05
17:18:47.329228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.116 [2024-12-05 17:18:47.329279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:13.116 [2024-12-05 17:18:47.329295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:13.116 [2024-12-05 17:18:47.329305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.116 [2024-12-05 17:18:47.329327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:13.116 [2024-12-05 17:18:47.332388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.116 [2024-12-05 17:18:47.332426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:13.116 [2024-12-05 17:18:47.332446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:32:13.116 [2024-12-05 17:18:47.332455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.116 [2024-12-05 17:18:47.335727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.116 [2024-12-05 17:18:47.335770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:13.116 [2024-12-05 17:18:47.335781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.243 ms 00:32:13.116 [2024-12-05 17:18:47.335789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.116 [2024-12-05 17:18:47.335817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.116 [2024-12-05 17:18:47.335826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:13.116 [2024-12-05 17:18:47.335834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:13.116 [2024-12-05 17:18:47.335843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.116 [2024-12-05 17:18:47.335902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.116 [2024-12-05 17:18:47.335912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:13.116 [2024-12-05 17:18:47.335920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:13.116 [2024-12-05 17:18:47.335928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.116 [2024-12-05 17:18:47.335942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:13.116 [2024-12-05 17:18:47.335971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.335982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.335990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.335998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336028] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:13.116 [2024-12-05 17:18:47.336186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 
[2024-12-05 17:18:47.336225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:32:13.117 [2024-12-05 17:18:47.336413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:13.117 [2024-12-05 17:18:47.336761] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:13.117 [2024-12-05 17:18:47.336769] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f853cac7-bcba-40a7-941e-823694b449b9 00:32:13.117 [2024-12-05 17:18:47.336777] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:13.117 [2024-12-05 17:18:47.336784] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:13.117 [2024-12-05 17:18:47.336791] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:13.117 [2024-12-05 17:18:47.336806] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:13.117 [2024-12-05 17:18:47.336813] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:13.117 [2024-12-05 17:18:47.336821] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:32:13.117 [2024-12-05 17:18:47.336828] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:13.117 [2024-12-05 17:18:47.336834] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:13.117 [2024-12-05 17:18:47.336841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:13.117 [2024-12-05 17:18:47.336848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.117 [2024-12-05 17:18:47.336856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:13.117 [2024-12-05 17:18:47.336863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.907 ms 00:32:13.117 [2024-12-05 17:18:47.336871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.117 [2024-12-05 17:18:47.350437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.117 [2024-12-05 17:18:47.350486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:13.117 [2024-12-05 17:18:47.350497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.548 ms 00:32:13.117 [2024-12-05 17:18:47.350505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.117 [2024-12-05 17:18:47.350898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:13.118 [2024-12-05 17:18:47.350915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:13.118 [2024-12-05 17:18:47.350925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:32:13.118 [2024-12-05 17:18:47.350932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.118 [2024-12-05 17:18:47.387411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.118 [2024-12-05 17:18:47.387456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:13.118 [2024-12-05 17:18:47.387467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.118 [2024-12-05 17:18:47.387475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.118 [2024-12-05 17:18:47.387544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.118 [2024-12-05 17:18:47.387553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:13.118 [2024-12-05 17:18:47.387561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.118 [2024-12-05 17:18:47.387569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.118 [2024-12-05 17:18:47.387631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.118 [2024-12-05 17:18:47.387648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:13.118 [2024-12-05 17:18:47.387656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.118 [2024-12-05 17:18:47.387665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.118 [2024-12-05 17:18:47.387681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.118 [2024-12-05 17:18:47.387689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:13.118 [2024-12-05 17:18:47.387702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.118 [2024-12-05 17:18:47.387710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.118 [2024-12-05 17:18:47.472106] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.118 [2024-12-05 17:18:47.472167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:13.118 [2024-12-05 17:18:47.472180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.118 [2024-12-05 17:18:47.472188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.540944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:13.378 [2024-12-05 17:18:47.541025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:13.378 [2024-12-05 17:18:47.541158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:13.378 [2024-12-05 17:18:47.541225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:13.378 [2024-12-05 17:18:47.541341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:13.378 [2024-12-05 17:18:47.541396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:13.378 [2024-12-05 17:18:47.541460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:13.378 [2024-12-05 17:18:47.541520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:13.378 [2024-12-05 17:18:47.541529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:13.378 [2024-12-05 17:18:47.541539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:13.378 [2024-12-05 17:18:47.541547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:32:13.378 [2024-12-05 17:18:47.541684] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 212.414 ms, result 0 00:32:14.762 00:32:14.762 00:32:14.762 17:18:48 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:14.762 [2024-12-05 17:18:48.793128] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:32:14.762 [2024-12-05 17:18:48.793246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84444 ] 00:32:14.762 [2024-12-05 17:18:48.946487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:14.762 [2024-12-05 17:18:49.027774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.023 [2024-12-05 17:18:49.238527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:15.023 [2024-12-05 17:18:49.238580] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:15.286 [2024-12-05 17:18:49.390139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.390177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:15.286 [2024-12-05 17:18:49.390188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:15.286 [2024-12-05 17:18:49.390194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.390226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.390235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:15.286 [2024-12-05 17:18:49.390241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:15.286 [2024-12-05 17:18:49.390247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.390260] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:15.286 [2024-12-05 17:18:49.390809] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:15.286 [2024-12-05 17:18:49.390822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.390828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:15.286 [2024-12-05 17:18:49.390834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:32:15.286 [2024-12-05 17:18:49.390840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391055] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:15.286 [2024-12-05 17:18:49.391072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.391080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:15.286 [2024-12-05 17:18:49.391086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:32:15.286 [2024-12-05 17:18:49.391092] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.391130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:15.286 [2024-12-05 17:18:49.391136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:32:15.286 [2024-12-05 17:18:49.391142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.391423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:15.286 [2024-12-05 17:18:49.391429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:32:15.286 [2024-12-05 17:18:49.391434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.391489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:15.286 [2024-12-05 17:18:49.391494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:15.286 [2024-12-05 17:18:49.391499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.391521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:15.286 [2024-12-05 17:18:49.391528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:15.286 [2024-12-05 17:18:49.391534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.391546] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:15.286 [2024-12-05 17:18:49.394368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.394394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:15.286 [2024-12-05 17:18:49.394401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:32:15.286 [2024-12-05 17:18:49.394407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.394432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.394439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:15.286 [2024-12-05 17:18:49.394445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:32:15.286 [2024-12-05 17:18:49.394450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.394480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:15.286 [2024-12-05 17:18:49.394496] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:15.286 [2024-12-05 17:18:49.394525] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:15.286 [2024-12-05 17:18:49.394536] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:15.286 [2024-12-05 17:18:49.394615] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:15.286 [2024-12-05 17:18:49.394622] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:15.286 [2024-12-05 17:18:49.394630] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:15.286 [2024-12-05 17:18:49.394637] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:15.286 [2024-12-05 17:18:49.394644] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:15.286 [2024-12-05 17:18:49.394652] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:15.286 [2024-12-05 17:18:49.394657] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:15.286 [2024-12-05 17:18:49.394663] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:15.286 [2024-12-05 17:18:49.394668] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:15.286 [2024-12-05 17:18:49.394674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.394679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:15.286 [2024-12-05 17:18:49.394685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:32:15.286 [2024-12-05 17:18:49.394690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.394752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.286 [2024-12-05 17:18:49.394759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:15.286 [2024-12-05 17:18:49.394764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:32:15.286 [2024-12-05 17:18:49.394771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.286 [2024-12-05 17:18:49.394845] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:15.286 [2024-12-05 17:18:49.394852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:15.286 [2024-12-05 17:18:49.394858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.286 [2024-12-05 17:18:49.394864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.286 [2024-12-05 17:18:49.394870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:15.286 [2024-12-05 17:18:49.394874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:15.286 [2024-12-05 17:18:49.394880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:15.286 [2024-12-05 17:18:49.394885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:15.286 [2024-12-05 17:18:49.394890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:15.286 [2024-12-05 17:18:49.394894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.286 [2024-12-05 17:18:49.394899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:15.286 [2024-12-05 17:18:49.394907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:15.286 [2024-12-05 17:18:49.394913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:15.286 [2024-12-05 
17:18:49.394918] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:15.286 [2024-12-05 17:18:49.394923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:15.286 [2024-12-05 17:18:49.394931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.287 [2024-12-05 17:18:49.394936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:15.287 [2024-12-05 17:18:49.394941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:15.287 [2024-12-05 17:18:49.394946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.287 [2024-12-05 17:18:49.394961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:15.287 [2024-12-05 17:18:49.394967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:15.287 [2024-12-05 17:18:49.394972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.287 [2024-12-05 17:18:49.394977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:15.287 [2024-12-05 17:18:49.394982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:15.287 [2024-12-05 17:18:49.394987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.287 [2024-12-05 17:18:49.394993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:15.287 [2024-12-05 17:18:49.394997] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395002] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.287 [2024-12-05 17:18:49.395007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:15.287 [2024-12-05 17:18:49.395012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:15.287 [2024-12-05 17:18:49.395022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:15.287 [2024-12-05 17:18:49.395027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.287 [2024-12-05 17:18:49.395038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:15.287 [2024-12-05 17:18:49.395043] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:15.287 [2024-12-05 17:18:49.395048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:15.287 [2024-12-05 17:18:49.395053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:15.287 [2024-12-05 17:18:49.395058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:15.287 [2024-12-05 17:18:49.395063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:15.287 [2024-12-05 17:18:49.395073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:15.287 [2024-12-05 17:18:49.395078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:15.287 [2024-12-05 17:18:49.395090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:32:15.287 [2024-12-05 17:18:49.395096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:15.287 [2024-12-05 17:18:49.395101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:15.287 [2024-12-05 17:18:49.395108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:15.287 [2024-12-05 17:18:49.395114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:15.287 [2024-12-05 17:18:49.395119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:15.287 [2024-12-05 17:18:49.395124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:15.287 [2024-12-05 17:18:49.395128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:15.287 [2024-12-05 17:18:49.395133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:15.287 [2024-12-05 17:18:49.395140] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:15.287 [2024-12-05 17:18:49.395146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:15.287 [2024-12-05 17:18:49.395158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:15.287 [2024-12-05 17:18:49.395163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:15.287 [2024-12-05 17:18:49.395168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:15.287 [2024-12-05 17:18:49.395174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:15.287 [2024-12-05 17:18:49.395179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:15.287 [2024-12-05 17:18:49.395185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:15.287 [2024-12-05 17:18:49.395190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:15.287 [2024-12-05 17:18:49.395195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:15.287 [2024-12-05 17:18:49.395201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395222] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:15.287 [2024-12-05 17:18:49.395227] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:15.287 [2024-12-05 17:18:49.395233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:15.287 [2024-12-05 17:18:49.395245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:15.287 [2024-12-05 17:18:49.395250] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:15.287 [2024-12-05 17:18:49.395255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:15.287 [2024-12-05 17:18:49.395262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.395268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:15.287 [2024-12-05 17:18:49.395273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:32:15.287 [2024-12-05 17:18:49.395279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.413804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.413831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:15.287 [2024-12-05 17:18:49.413839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.495 ms 00:32:15.287 [2024-12-05 17:18:49.413845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.413907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.413917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:15.287 [2024-12-05 17:18:49.413925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:32:15.287 [2024-12-05 17:18:49.413930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.452721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.452754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:15.287 [2024-12-05 17:18:49.452763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.745 ms 00:32:15.287 [2024-12-05 17:18:49.452769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.452802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.452809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:15.287 [2024-12-05 17:18:49.452815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:15.287 [2024-12-05 17:18:49.452821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.452891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 
17:18:49.452899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:15.287 [2024-12-05 17:18:49.452906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:32:15.287 [2024-12-05 17:18:49.452911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.453008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.453017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:15.287 [2024-12-05 17:18:49.453023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:32:15.287 [2024-12-05 17:18:49.453028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.463432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.463458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:15.287 [2024-12-05 17:18:49.463465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.390 ms 00:32:15.287 [2024-12-05 17:18:49.463472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.463553] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:15.287 [2024-12-05 17:18:49.463563] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:15.287 [2024-12-05 17:18:49.463570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.463577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:15.287 [2024-12-05 17:18:49.463583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:32:15.287 [2024-12-05 17:18:49.463589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.287 [2024-12-05 17:18:49.472726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.287 [2024-12-05 17:18:49.472750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:15.287 [2024-12-05 17:18:49.472758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.126 ms 00:32:15.287 [2024-12-05 17:18:49.472765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.472853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.472859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:15.288 [2024-12-05 17:18:49.472865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:32:15.288 [2024-12-05 17:18:49.472873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.472897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.472904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:15.288 [2024-12-05 17:18:49.472914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:32:15.288 [2024-12-05 17:18:49.472920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.473351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.473371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize P2L checkpointing 00:32:15.288 [2024-12-05 17:18:49.473378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:32:15.288 [2024-12-05 17:18:49.473383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.473396] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:15.288 [2024-12-05 17:18:49.473403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.473409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:32:15.288 [2024-12-05 17:18:49.473415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:15.288 [2024-12-05 17:18:49.473421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.481913] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:15.288 [2024-12-05 17:18:49.482028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.482041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:15.288 [2024-12-05 17:18:49.482047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.593 ms 00:32:15.288 [2024-12-05 17:18:49.482053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.483645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.483665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:15.288 [2024-12-05 17:18:49.483672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:32:15.288 [2024-12-05 17:18:49.483678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.483746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.483754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:15.288 [2024-12-05 17:18:49.483760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:15.288 [2024-12-05 17:18:49.483766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.483781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.483790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:15.288 [2024-12-05 17:18:49.483795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:32:15.288 [2024-12-05 17:18:49.483801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.483821] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:15.288 [2024-12-05 17:18:49.483828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.483834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:15.288 [2024-12-05 17:18:49.483839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:32:15.288 [2024-12-05 17:18:49.483844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.501957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.501985] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:15.288 [2024-12-05 17:18:49.501993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.100 ms 00:32:15.288 [2024-12-05 17:18:49.502000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.502051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:15.288 [2024-12-05 17:18:49.502059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:15.288 [2024-12-05 17:18:49.502065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:32:15.288 [2024-12-05 17:18:49.502070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:15.288 [2024-12-05 17:18:49.502834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 112.392 ms, result 0 00:32:16.674  [2024-12-05T17:18:51.985Z] Copying: 21/1024 [MB] (21 MBps) [2024-12-05T17:18:52.927Z] Copying: 44/1024 [MB] (23 MBps) [2024-12-05T17:18:53.870Z] Copying: 64/1024 [MB] (19 MBps) [2024-12-05T17:18:54.811Z] Copying: 83/1024 [MB] (18 MBps) [2024-12-05T17:18:55.753Z] Copying: 105/1024 [MB] (22 MBps) [2024-12-05T17:18:56.712Z] Copying: 120/1024 [MB] (15 MBps) [2024-12-05T17:18:57.703Z] Copying: 138/1024 [MB] (17 MBps) [2024-12-05T17:18:58.647Z] Copying: 155/1024 [MB] (17 MBps) [2024-12-05T17:19:00.036Z] Copying: 170/1024 [MB] (14 MBps) [2024-12-05T17:19:00.981Z] Copying: 186/1024 [MB] (15 MBps) [2024-12-05T17:19:01.921Z] Copying: 202/1024 [MB] (16 MBps) [2024-12-05T17:19:02.864Z] Copying: 222/1024 [MB] (19 MBps) [2024-12-05T17:19:03.808Z] Copying: 239/1024 [MB] (17 MBps) [2024-12-05T17:19:04.751Z] Copying: 260/1024 [MB] (20 MBps) [2024-12-05T17:19:05.702Z] Copying: 282/1024 [MB] (22 MBps) [2024-12-05T17:19:06.644Z] Copying: 306/1024 [MB] (23 MBps) [2024-12-05T17:19:08.029Z] Copying: 329/1024 [MB] (22 MBps) [2024-12-05T17:19:08.972Z] Copying: 346/1024 [MB] (17 MBps) [2024-12-05T17:19:09.916Z] Copying: 371/1024 [MB] (24 MBps) [2024-12-05T17:19:10.860Z] Copying: 390/1024 [MB] (19 MBps) [2024-12-05T17:19:11.801Z] Copying: 401/1024 [MB] (10 MBps) [2024-12-05T17:19:12.743Z] Copying: 424/1024 [MB] (23 MBps) [2024-12-05T17:19:13.687Z] Copying: 444/1024 [MB] (19 MBps) [2024-12-05T17:19:15.077Z] Copying: 468/1024 [MB] (23 MBps) [2024-12-05T17:19:15.646Z] Copying: 491/1024 [MB] (23 MBps) [2024-12-05T17:19:17.033Z] Copying: 502/1024 [MB] (11 MBps) [2024-12-05T17:19:17.977Z] Copying: 514/1024 [MB] (11 MBps) [2024-12-05T17:19:18.921Z] Copying: 537/1024 [MB] (22 MBps) [2024-12-05T17:19:19.868Z] Copying: 560/1024 [MB] (22 MBps) [2024-12-05T17:19:20.814Z] Copying: 581/1024 [MB] (21 MBps) [2024-12-05T17:19:21.759Z] Copying: 603/1024 [MB] (21 MBps) [2024-12-05T17:19:22.704Z] Copying: 622/1024 [MB] (19 MBps) [2024-12-05T17:19:23.646Z] Copying: 635/1024 [MB] (13 MBps) [2024-12-05T17:19:25.021Z] Copying: 652/1024 [MB] (17 MBps) [2024-12-05T17:19:25.637Z] Copying: 676/1024 [MB] (23 MBps) [2024-12-05T17:19:27.024Z] Copying: 693/1024 [MB] (17 MBps) [2024-12-05T17:19:27.965Z] Copying: 712/1024 [MB] (18 MBps) [2024-12-05T17:19:28.955Z] Copying: 730/1024 [MB] (18 MBps) [2024-12-05T17:19:29.985Z] Copying: 750/1024 [MB] (20 MBps) [2024-12-05T17:19:30.929Z] Copying: 761/1024 [MB] (11 MBps) [2024-12-05T17:19:31.872Z] Copying: 775/1024 [MB] (13 MBps) [2024-12-05T17:19:32.812Z] Copying: 785/1024 [MB] (10 MBps) [2024-12-05T17:19:33.753Z] Copying: 796/1024 [MB] (10 MBps) [2024-12-05T17:19:34.695Z] Copying: 809/1024 [MB] (12 MBps) 
[2024-12-05T17:19:36.069Z] Copying: 820/1024 [MB] (11 MBps) [2024-12-05T17:19:37.005Z] Copying: 837/1024 [MB] (17 MBps) [2024-12-05T17:19:37.949Z] Copying: 853/1024 [MB] (15 MBps) [2024-12-05T17:19:38.892Z] Copying: 870/1024 [MB] (17 MBps) [2024-12-05T17:19:39.830Z] Copying: 892/1024 [MB] (21 MBps) [2024-12-05T17:19:40.774Z] Copying: 911/1024 [MB] (18 MBps) [2024-12-05T17:19:41.717Z] Copying: 925/1024 [MB] (14 MBps) [2024-12-05T17:19:42.657Z] Copying: 939/1024 [MB] (14 MBps) [2024-12-05T17:19:44.042Z] Copying: 958/1024 [MB] (18 MBps) [2024-12-05T17:19:44.985Z] Copying: 973/1024 [MB] (14 MBps) [2024-12-05T17:19:45.926Z] Copying: 993/1024 [MB] (20 MBps) [2024-12-05T17:19:46.495Z] Copying: 1008/1024 [MB] (14 MBps) [2024-12-05T17:19:46.757Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 17:19:46.691906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.390 [2024-12-05 17:19:46.691987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:33:12.390 [2024-12-05 17:19:46.692002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:12.390 [2024-12-05 17:19:46.692010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.390 [2024-12-05 17:19:46.692047] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:33:12.390 [2024-12-05 17:19:46.697902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.390 [2024-12-05 17:19:46.697964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:33:12.390 [2024-12-05 17:19:46.697984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.838 ms 00:33:12.390 [2024-12-05 17:19:46.697998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.390 [2024-12-05 17:19:46.698409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.390 [2024-12-05 17:19:46.698436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:33:12.390 [2024-12-05 17:19:46.698452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.374 ms 00:33:12.390 [2024-12-05 17:19:46.698477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.390 [2024-12-05 17:19:46.698529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.390 [2024-12-05 17:19:46.698545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:33:12.390 [2024-12-05 17:19:46.698560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:12.390 [2024-12-05 17:19:46.698573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.390 [2024-12-05 17:19:46.698648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.390 [2024-12-05 17:19:46.698674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:33:12.390 [2024-12-05 17:19:46.698689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:33:12.390 [2024-12-05 17:19:46.698704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.390 [2024-12-05 17:19:46.698728] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:33:12.390 [2024-12-05 17:19:46.698750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.698994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:12.390 [2024-12-05 17:19:46.699296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699521] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 
17:19:46.699875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.699990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:12.391 [2024-12-05 17:19:46.700245] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:12.391 [2024-12-05 17:19:46.700259] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f853cac7-bcba-40a7-941e-823694b449b9 00:33:12.391 [2024-12-05 17:19:46.700273] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:33:12.391 [2024-12-05 17:19:46.700287] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:12.391 [2024-12-05 17:19:46.700301] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:12.391 [2024-12-05 17:19:46.700315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:12.391 [2024-12-05 17:19:46.700328] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:12.391 [2024-12-05 17:19:46.700342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:12.391 [2024-12-05 17:19:46.700362] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:12.391 [2024-12-05 17:19:46.700375] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:12.391 [2024-12-05 17:19:46.700387] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:12.391 [2024-12-05 17:19:46.700401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.391 [2024-12-05 17:19:46.700415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:12.391 [2024-12-05 17:19:46.700429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.673 ms 00:33:12.391 [2024-12-05 17:19:46.700446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.391 [2024-12-05 17:19:46.715171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.391 [2024-12-05 17:19:46.715201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:12.391 [2024-12-05 17:19:46.715211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.700 ms 00:33:12.391 [2024-12-05 17:19:46.715218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.391 [2024-12-05 17:19:46.715569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:12.391 [2024-12-05 17:19:46.715579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:12.391 [2024-12-05 17:19:46.715593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.332 ms 00:33:12.391 [2024-12-05 17:19:46.715600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.391 [2024-12-05 17:19:46.750036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.391 [2024-12-05 17:19:46.750069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:12.391 [2024-12-05 17:19:46.750078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.391 [2024-12-05 17:19:46.750086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.391 [2024-12-05 17:19:46.750146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.391 [2024-12-05 17:19:46.750154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:12.391 [2024-12-05 17:19:46.750166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.391 [2024-12-05 17:19:46.750174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.391 [2024-12-05 17:19:46.750229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.391 [2024-12-05 17:19:46.750240] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:12.392 [2024-12-05 17:19:46.750248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.392 [2024-12-05 17:19:46.750257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.392 [2024-12-05 17:19:46.750272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.392 [2024-12-05 17:19:46.750280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:12.392 [2024-12-05 17:19:46.750287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.392 [2024-12-05 17:19:46.750297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.833357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.833410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:12.651 [2024-12-05 17:19:46.833424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.833432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:12.651 [2024-12-05 17:19:46.903064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:12.651 [2024-12-05 17:19:46.903181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:12.651 [2024-12-05 17:19:46.903253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:12.651 [2024-12-05 17:19:46.903498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:12.651 [2024-12-05 17:19:46.903549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:12.651 [2024-12-05 17:19:46.903619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:12.651 [2024-12-05 17:19:46.903682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:12.651 [2024-12-05 17:19:46.903691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:12.651 [2024-12-05 17:19:46.903698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:12.651 [2024-12-05 17:19:46.903833] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 211.890 ms, result 0 00:33:13.593 00:33:13.593 00:33:13.593 17:19:47 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:16.142 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:16.142 17:19:50 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:16.142 [2024-12-05 17:19:50.073113] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 00:33:16.142 [2024-12-05 17:19:50.073429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85060 ] 00:33:16.142 [2024-12-05 17:19:50.236052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.142 [2024-12-05 17:19:50.352658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.403 [2024-12-05 17:19:50.647675] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:16.403 [2024-12-05 17:19:50.647750] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:16.667 [2024-12-05 17:19:50.809600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.809654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:16.667 [2024-12-05 17:19:50.809669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:16.667 [2024-12-05 17:19:50.809678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.809733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.809746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:16.667 [2024-12-05 17:19:50.809755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:33:16.667 [2024-12-05 17:19:50.809764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.809786] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:16.667 [2024-12-05 17:19:50.810560] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
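(Editor's note, not part of the captured log: the lines above are the "ftl_restore_fast" round trip in motion. The first FTL instance shut down cleanly, md5sum -c confirmed the seed file, and restore.sh@79 is now writing that file into the ftl0 bdev through spdk_dd while the device starts back up in the trace that follows. A condensed sketch of the round trip, using only the flags visible in this log; paths are abbreviated, and the final check is an assumption that it mirrors the first one:)

    # ftl_restore_fast round trip (sketch; flags as seen at restore.sh@76/@79/@80)
    md5sum -c test/ftl/testfile.md5                                    # verify the seed file first
    build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
        --json=test/ftl/config/ftl.json --seek=131072                  # write it into ftl0 at a block offset
    build/bin/spdk_dd --ib=ftl0 --of=test/ftl/testfile \
        --json=test/ftl/config/ftl.json --skip=131072 --count=262144   # read the same region back out
    md5sum -c test/ftl/testfile.md5                                    # assumed follow-up: digest must still match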
00:33:16.667 [2024-12-05 17:19:50.810609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.810618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:16.667 [2024-12-05 17:19:50.810628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.830 ms 00:33:16.667 [2024-12-05 17:19:50.810636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.810917] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:16.667 [2024-12-05 17:19:50.810984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.810998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:16.667 [2024-12-05 17:19:50.811009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:16.667 [2024-12-05 17:19:50.811018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.811072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.811083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:16.667 [2024-12-05 17:19:50.811092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:16.667 [2024-12-05 17:19:50.811099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.811411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.811434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:16.667 [2024-12-05 17:19:50.811444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:33:16.667 [2024-12-05 17:19:50.811452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.667 [2024-12-05 17:19:50.811523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.667 [2024-12-05 17:19:50.811535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:16.668 [2024-12-05 17:19:50.811544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:33:16.668 [2024-12-05 17:19:50.811553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.811576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.668 [2024-12-05 17:19:50.811585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:16.668 [2024-12-05 17:19:50.811596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:33:16.668 [2024-12-05 17:19:50.811604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.811626] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:16.668 [2024-12-05 17:19:50.815883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.668 [2024-12-05 17:19:50.815919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:16.668 [2024-12-05 17:19:50.815930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.262 ms 00:33:16.668 [2024-12-05 17:19:50.815938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.815987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.668 
[2024-12-05 17:19:50.815997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:16.668 [2024-12-05 17:19:50.816006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:33:16.668 [2024-12-05 17:19:50.816014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.816069] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:16.668 [2024-12-05 17:19:50.816094] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:16.668 [2024-12-05 17:19:50.816138] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:16.668 [2024-12-05 17:19:50.816155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:16.668 [2024-12-05 17:19:50.816261] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:16.668 [2024-12-05 17:19:50.816273] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:16.668 [2024-12-05 17:19:50.816284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:16.668 [2024-12-05 17:19:50.816294] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816304] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816315] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:16.668 [2024-12-05 17:19:50.816323] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:16.668 [2024-12-05 17:19:50.816331] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:16.668 [2024-12-05 17:19:50.816339] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:16.668 [2024-12-05 17:19:50.816347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.668 [2024-12-05 17:19:50.816354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:16.668 [2024-12-05 17:19:50.816362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:33:16.668 [2024-12-05 17:19:50.816370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.816452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.668 [2024-12-05 17:19:50.816461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:16.668 [2024-12-05 17:19:50.816469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:33:16.668 [2024-12-05 17:19:50.816479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.668 [2024-12-05 17:19:50.816581] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:16.668 [2024-12-05 17:19:50.816602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:16.668 [2024-12-05 17:19:50.816612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816635] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:16.668 [2024-12-05 17:19:50.816642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:16.668 [2024-12-05 17:19:50.816664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:16.668 [2024-12-05 17:19:50.816678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:16.668 [2024-12-05 17:19:50.816685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:16.668 [2024-12-05 17:19:50.816691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:16.668 [2024-12-05 17:19:50.816728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:16.668 [2024-12-05 17:19:50.816736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:16.668 [2024-12-05 17:19:50.816749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:16.668 [2024-12-05 17:19:50.816763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:16.668 [2024-12-05 17:19:50.816784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:16.668 [2024-12-05 17:19:50.816804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:16.668 [2024-12-05 17:19:50.816824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:16.668 [2024-12-05 17:19:50.816846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:16.668 [2024-12-05 17:19:50.816865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:16.668 [2024-12-05 17:19:50.816878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:16.668 [2024-12-05 17:19:50.816885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:16.668 [2024-12-05 
17:19:50.816893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:16.668 [2024-12-05 17:19:50.816901] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:16.668 [2024-12-05 17:19:50.816909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:16.668 [2024-12-05 17:19:50.816915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:16.668 [2024-12-05 17:19:50.816928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:16.668 [2024-12-05 17:19:50.816935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816942] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:16.668 [2024-12-05 17:19:50.816967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:16.668 [2024-12-05 17:19:50.816975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:16.668 [2024-12-05 17:19:50.816983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:16.668 [2024-12-05 17:19:50.816993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:16.668 [2024-12-05 17:19:50.817000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:16.668 [2024-12-05 17:19:50.817007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:16.668 [2024-12-05 17:19:50.817014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:16.668 [2024-12-05 17:19:50.817020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:16.668 [2024-12-05 17:19:50.817027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:16.668 [2024-12-05 17:19:50.817036] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:16.668 [2024-12-05 17:19:50.817045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:16.668 [2024-12-05 17:19:50.817054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:16.668 [2024-12-05 17:19:50.817062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:16.668 [2024-12-05 17:19:50.817069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:16.668 [2024-12-05 17:19:50.817077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:16.668 [2024-12-05 17:19:50.817085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:16.668 [2024-12-05 17:19:50.817093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:16.668 [2024-12-05 17:19:50.817100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:16.668 [2024-12-05 17:19:50.817107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:16.668 [2024-12-05 17:19:50.817114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:33:16.668 [2024-12-05 17:19:50.817121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:16.669 [2024-12-05 17:19:50.817161] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:16.669 [2024-12-05 17:19:50.817171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:16.669 [2024-12-05 17:19:50.817187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:16.669 [2024-12-05 17:19:50.817194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:16.669 [2024-12-05 17:19:50.817201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:16.669 [2024-12-05 17:19:50.817208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.817216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:16.669 [2024-12-05 17:19:50.817224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:33:16.669 [2024-12-05 17:19:50.817231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.844818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.844855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:16.669 [2024-12-05 17:19:50.844866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.543 ms 00:33:16.669 [2024-12-05 17:19:50.844875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.844975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.844985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:16.669 [2024-12-05 17:19:50.844997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:33:16.669 [2024-12-05 17:19:50.845004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.892539] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.892586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:16.669 [2024-12-05 17:19:50.892599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.478 ms 00:33:16.669 [2024-12-05 17:19:50.892608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.892656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.892667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:16.669 [2024-12-05 17:19:50.892676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:33:16.669 [2024-12-05 17:19:50.892685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.892809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.892821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:16.669 [2024-12-05 17:19:50.892831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:33:16.669 [2024-12-05 17:19:50.892839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.892992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.893007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:16.669 [2024-12-05 17:19:50.893016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:33:16.669 [2024-12-05 17:19:50.893023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.908478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.908516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:16.669 [2024-12-05 17:19:50.908528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.435 ms 00:33:16.669 [2024-12-05 17:19:50.908536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.908686] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:16.669 [2024-12-05 17:19:50.908716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:16.669 [2024-12-05 17:19:50.908730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.908739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:16.669 [2024-12-05 17:19:50.908748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:16.669 [2024-12-05 17:19:50.908756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.921068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.921103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:16.669 [2024-12-05 17:19:50.921114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.294 ms 00:33:16.669 [2024-12-05 17:19:50.921121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.921250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 
[2024-12-05 17:19:50.921261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:16.669 [2024-12-05 17:19:50.921270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:33:16.669 [2024-12-05 17:19:50.921283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.921333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.921344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:16.669 [2024-12-05 17:19:50.921360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:33:16.669 [2024-12-05 17:19:50.921367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.921980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.922005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:16.669 [2024-12-05 17:19:50.922016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:33:16.669 [2024-12-05 17:19:50.922024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.922048] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:16.669 [2024-12-05 17:19:50.922061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.922069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:16.669 [2024-12-05 17:19:50.922078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:33:16.669 [2024-12-05 17:19:50.922085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.934689] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:16.669 [2024-12-05 17:19:50.934843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.934854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:16.669 [2024-12-05 17:19:50.934865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.736 ms 00:33:16.669 [2024-12-05 17:19:50.934873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.937124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.937155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:16.669 [2024-12-05 17:19:50.937165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.228 ms 00:33:16.669 [2024-12-05 17:19:50.937174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.937264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.937275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:16.669 [2024-12-05 17:19:50.937284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:16.669 [2024-12-05 17:19:50.937294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.937318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.937333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core 
poller 00:33:16.669 [2024-12-05 17:19:50.937342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:16.669 [2024-12-05 17:19:50.937350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.937382] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:16.669 [2024-12-05 17:19:50.937394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.937402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:16.669 [2024-12-05 17:19:50.937410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:16.669 [2024-12-05 17:19:50.937418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.964079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.964122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:16.669 [2024-12-05 17:19:50.964135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.640 ms 00:33:16.669 [2024-12-05 17:19:50.964144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.964227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:16.669 [2024-12-05 17:19:50.964238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:16.669 [2024-12-05 17:19:50.964247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:33:16.669 [2024-12-05 17:19:50.964255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:16.669 [2024-12-05 17:19:50.965632] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 155.580 ms, result 0 00:33:17.616  [2024-12-05T17:19:53.370Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-05T17:19:54.312Z] Copying: 28/1024 [MB] (15 MBps) [2024-12-05T17:19:55.255Z] Copying: 50/1024 [MB] (22 MBps) [2024-12-05T17:19:56.272Z] Copying: 65/1024 [MB] (14 MBps) [2024-12-05T17:19:57.226Z] Copying: 79/1024 [MB] (14 MBps) [2024-12-05T17:19:58.169Z] Copying: 89/1024 [MB] (10 MBps) [2024-12-05T17:19:59.112Z] Copying: 101/1024 [MB] (12 MBps) [2024-12-05T17:20:00.057Z] Copying: 122/1024 [MB] (20 MBps) [2024-12-05T17:20:01.002Z] Copying: 136/1024 [MB] (13 MBps) [2024-12-05T17:20:02.387Z] Copying: 152/1024 [MB] (16 MBps) [2024-12-05T17:20:03.331Z] Copying: 181/1024 [MB] (28 MBps) [2024-12-05T17:20:04.276Z] Copying: 207/1024 [MB] (26 MBps) [2024-12-05T17:20:05.221Z] Copying: 225/1024 [MB] (18 MBps) [2024-12-05T17:20:06.163Z] Copying: 249/1024 [MB] (23 MBps) [2024-12-05T17:20:07.105Z] Copying: 265/1024 [MB] (15 MBps) [2024-12-05T17:20:08.051Z] Copying: 291/1024 [MB] (26 MBps) [2024-12-05T17:20:08.998Z] Copying: 303/1024 [MB] (12 MBps) [2024-12-05T17:20:10.386Z] Copying: 319/1024 [MB] (15 MBps) [2024-12-05T17:20:11.331Z] Copying: 333/1024 [MB] (14 MBps) [2024-12-05T17:20:12.277Z] Copying: 351/1024 [MB] (17 MBps) [2024-12-05T17:20:13.223Z] Copying: 369/1024 [MB] (17 MBps) [2024-12-05T17:20:14.169Z] Copying: 381/1024 [MB] (11 MBps) [2024-12-05T17:20:15.112Z] Copying: 391/1024 [MB] (10 MBps) [2024-12-05T17:20:16.055Z] Copying: 404/1024 [MB] (12 MBps) [2024-12-05T17:20:17.001Z] Copying: 417/1024 [MB] (13 MBps) [2024-12-05T17:20:18.389Z] Copying: 430/1024 [MB] (12 MBps) [2024-12-05T17:20:19.333Z] Copying: 455/1024 [MB] (25 MBps) [2024-12-05T17:20:20.278Z] 
Copying: 482/1024 [MB] (26 MBps) [2024-12-05T17:20:21.223Z] Copying: 499/1024 [MB] (17 MBps) [2024-12-05T17:20:22.167Z] Copying: 515/1024 [MB] (15 MBps) [2024-12-05T17:20:23.110Z] Copying: 537/1024 [MB] (21 MBps) [2024-12-05T17:20:24.052Z] Copying: 558/1024 [MB] (21 MBps) [2024-12-05T17:20:25.028Z] Copying: 585/1024 [MB] (27 MBps) [2024-12-05T17:20:25.995Z] Copying: 600/1024 [MB] (14 MBps) [2024-12-05T17:20:27.400Z] Copying: 626/1024 [MB] (26 MBps) [2024-12-05T17:20:28.345Z] Copying: 646/1024 [MB] (19 MBps) [2024-12-05T17:20:29.290Z] Copying: 667/1024 [MB] (20 MBps) [2024-12-05T17:20:30.236Z] Copying: 693/1024 [MB] (26 MBps) [2024-12-05T17:20:31.181Z] Copying: 708/1024 [MB] (14 MBps) [2024-12-05T17:20:32.125Z] Copying: 722/1024 [MB] (13 MBps) [2024-12-05T17:20:33.069Z] Copying: 747/1024 [MB] (24 MBps) [2024-12-05T17:20:34.013Z] Copying: 766/1024 [MB] (19 MBps) [2024-12-05T17:20:35.400Z] Copying: 777/1024 [MB] (11 MBps) [2024-12-05T17:20:36.341Z] Copying: 802/1024 [MB] (24 MBps) [2024-12-05T17:20:37.286Z] Copying: 821/1024 [MB] (19 MBps) [2024-12-05T17:20:38.231Z] Copying: 848/1024 [MB] (27 MBps) [2024-12-05T17:20:39.176Z] Copying: 867/1024 [MB] (18 MBps) [2024-12-05T17:20:40.121Z] Copying: 883/1024 [MB] (15 MBps) [2024-12-05T17:20:41.065Z] Copying: 906/1024 [MB] (23 MBps) [2024-12-05T17:20:42.009Z] Copying: 922/1024 [MB] (16 MBps) [2024-12-05T17:20:43.396Z] Copying: 941/1024 [MB] (18 MBps) [2024-12-05T17:20:44.337Z] Copying: 964/1024 [MB] (23 MBps) [2024-12-05T17:20:45.285Z] Copying: 985/1024 [MB] (20 MBps) [2024-12-05T17:20:46.226Z] Copying: 1001/1024 [MB] (16 MBps) [2024-12-05T17:20:46.802Z] Copying: 1023/1024 [MB] (21 MBps) [2024-12-05T17:20:46.802Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-05 17:20:46.755446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.435 [2024-12-05 17:20:46.755496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:12.435 [2024-12-05 17:20:46.755510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:12.435 [2024-12-05 17:20:46.755519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.435 [2024-12-05 17:20:46.756839] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:12.435 [2024-12-05 17:20:46.761851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.435 [2024-12-05 17:20:46.761888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:12.435 [2024-12-05 17:20:46.761900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.982 ms 00:34:12.435 [2024-12-05 17:20:46.761908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.435 [2024-12-05 17:20:46.771739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.435 [2024-12-05 17:20:46.771771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:12.435 [2024-12-05 17:20:46.771781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.469 ms 00:34:12.435 [2024-12-05 17:20:46.771789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.435 [2024-12-05 17:20:46.771814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.435 [2024-12-05 17:20:46.771823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:12.435 [2024-12-05 17:20:46.771832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:34:12.435 [2024-12-05 17:20:46.771839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.435 [2024-12-05 17:20:46.771884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:12.435 [2024-12-05 17:20:46.771895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:12.435 [2024-12-05 17:20:46.771903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:34:12.435 [2024-12-05 17:20:46.771910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:12.436 [2024-12-05 17:20:46.771922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:12.436 [2024-12-05 17:20:46.771934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126464 / 261120 wr_cnt: 1 state: open 00:34:12.436 [2024-12-05 17:20:46.771943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.771998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:34:12.436 [2024-12-05 17:20:46.772096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:12.436 [2024-12-05 17:20:46.772583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:12.437 [2024-12-05 17:20:46.772650] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
[... Band 96 through Band 100: 0 / 261120 wr_cnt: 0 state: free (identical records elided) ...]
00:34:12.437 [2024-12-05 17:20:46.772701] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:34:12.437 [2024-12-05 17:20:46.772708] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f853cac7-bcba-40a7-941e-823694b449b9
00:34:12.437 [2024-12-05 17:20:46.772725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126464
00:34:12.437 [2024-12-05 17:20:46.772732] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126496
00:34:12.437 [2024-12-05 17:20:46.772739] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126464
00:34:12.437 [2024-12-05 17:20:46.772747] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003
00:34:12.437 [2024-12-05 17:20:46.772756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:34:12.437 [2024-12-05 17:20:46.772764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:34:12.437 [2024-12-05 17:20:46.772772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:34:12.437 [2024-12-05 17:20:46.772779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:34:12.437 [2024-12-05 17:20:46.772785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:34:12.437 [2024-12-05 17:20:46.772792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:12.437 [2024-12-05 17:20:46.772799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:34:12.437 [2024-12-05 17:20:46.772807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms
00:34:12.437 [2024-12-05 17:20:46.772814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:12.437 [2024-12-05 17:20:46.785284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:12.437 [2024-12-05 17:20:46.785311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:34:12.437 [2024-12-05 17:20:46.785325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.456 ms
00:34:12.437 [2024-12-05 17:20:46.785333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:12.437 [2024-12-05 17:20:46.785663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:12.437 [2024-12-05 17:20:46.785671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:34:12.437 [2024-12-05 17:20:46.785679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms
00:34:12.437 [2024-12-05 17:20:46.785686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... 12 rollback trace steps elided: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev; each reports duration: 0.000 ms, status: 0 ...]
00:34:12.699 [2024-12-05 17:20:46.965468] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 213.605 ms, result 0
00:34:14.087
00:34:14.087
00:34:14.087 17:20:48 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:34:14.348 [2024-12-05 17:20:48.497592] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:34:14.348 [2024-12-05 17:20:48.497723] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85640 ] 00:34:14.348 [2024-12-05 17:20:48.659226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.610 [2024-12-05 17:20:48.769861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:14.873 [2024-12-05 17:20:49.067887] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:14.873 [2024-12-05 17:20:49.067993] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:14.873 [2024-12-05 17:20:49.229512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.229574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:14.873 [2024-12-05 17:20:49.229591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:14.873 [2024-12-05 17:20:49.229600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.229653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.229667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:14.873 [2024-12-05 17:20:49.229676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:34:14.873 [2024-12-05 17:20:49.229684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.229705] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:14.873 [2024-12-05 17:20:49.230459] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:14.873 [2024-12-05 17:20:49.230478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.230487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:14.873 [2024-12-05 17:20:49.230497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.778 ms 00:34:14.873 [2024-12-05 17:20:49.230505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.230787] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:14.873 [2024-12-05 17:20:49.230814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.230827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:14.873 [2024-12-05 17:20:49.230836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:34:14.873 [2024-12-05 17:20:49.230845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.230901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.230911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:14.873 [2024-12-05 17:20:49.230919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:14.873 [2024-12-05 17:20:49.230926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.231385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:14.873 [2024-12-05 17:20:49.231408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:14.873 [2024-12-05 17:20:49.231418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:34:14.873 [2024-12-05 17:20:49.231426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.231506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.231517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:14.873 [2024-12-05 17:20:49.231525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:14.873 [2024-12-05 17:20:49.231533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.231556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.231565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:14.873 [2024-12-05 17:20:49.231576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:34:14.873 [2024-12-05 17:20:49.231584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.231605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:14.873 [2024-12-05 17:20:49.236000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.236040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:14.873 [2024-12-05 17:20:49.236050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.400 ms 00:34:14.873 [2024-12-05 17:20:49.236058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.236097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.236105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:14.873 [2024-12-05 17:20:49.236114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:14.873 [2024-12-05 17:20:49.236121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.236181] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:14.873 [2024-12-05 17:20:49.236209] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:14.873 [2024-12-05 17:20:49.236248] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:14.873 [2024-12-05 17:20:49.236265] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:14.873 [2024-12-05 17:20:49.236370] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:14.873 [2024-12-05 17:20:49.236380] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:14.873 [2024-12-05 17:20:49.236391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:14.873 [2024-12-05 17:20:49.236401] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:14.873 [2024-12-05 17:20:49.236410] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:14.873 [2024-12-05 17:20:49.236421] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:14.873 [2024-12-05 17:20:49.236429] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:14.873 [2024-12-05 17:20:49.236436] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:14.873 [2024-12-05 17:20:49.236444] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:14.873 [2024-12-05 17:20:49.236452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.236460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:14.873 [2024-12-05 17:20:49.236468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:34:14.873 [2024-12-05 17:20:49.236475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.236559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.873 [2024-12-05 17:20:49.236567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:14.873 [2024-12-05 17:20:49.236574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:34:14.873 [2024-12-05 17:20:49.236584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:14.873 [2024-12-05 17:20:49.236687] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:14.873 [2024-12-05 17:20:49.236697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:14.873 [2024-12-05 17:20:49.236706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:14.874 [2024-12-05 17:20:49.236745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:14.874 [2024-12-05 17:20:49.236767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.874 [2024-12-05 17:20:49.236782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:14.874 [2024-12-05 17:20:49.236789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:14.874 [2024-12-05 17:20:49.236796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:14.874 [2024-12-05 17:20:49.236803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:14.874 [2024-12-05 17:20:49.236810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:14.874 [2024-12-05 17:20:49.236823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:14.874 [2024-12-05 17:20:49.236837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236844] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:14.874 [2024-12-05 17:20:49.236859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236873] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:14.874 [2024-12-05 17:20:49.236881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:14.874 [2024-12-05 17:20:49.236901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:14.874 [2024-12-05 17:20:49.236921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:14.874 [2024-12-05 17:20:49.236935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:14.874 [2024-12-05 17:20:49.236941] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:14.874 [2024-12-05 17:20:49.236963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.874 [2024-12-05 17:20:49.236970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:14.874 [2024-12-05 17:20:49.236976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:14.874 [2024-12-05 17:20:49.236986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:14.874 [2024-12-05 17:20:49.236993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:14.874 [2024-12-05 17:20:49.237000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:14.874 [2024-12-05 17:20:49.237007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.237014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:14.874 [2024-12-05 17:20:49.237021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:14.874 [2024-12-05 17:20:49.237028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.237034] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:14.874 [2024-12-05 17:20:49.237042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:14.874 [2024-12-05 17:20:49.237051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:14.874 [2024-12-05 17:20:49.237059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:14.874 [2024-12-05 17:20:49.237069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:14.874 [2024-12-05 17:20:49.237077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:14.874 [2024-12-05 17:20:49.237083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:14.874 
[2024-12-05 17:20:49.237090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:14.874 [2024-12-05 17:20:49.237097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:14.874 [2024-12-05 17:20:49.237104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:14.874 [2024-12-05 17:20:49.237112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:14.874 [2024-12-05 17:20:49.237123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:14.874 [2024-12-05 17:20:49.237139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:14.874 [2024-12-05 17:20:49.237147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:14.874 [2024-12-05 17:20:49.237154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:14.874 [2024-12-05 17:20:49.237161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:14.874 [2024-12-05 17:20:49.237168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:14.874 [2024-12-05 17:20:49.237176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:14.874 [2024-12-05 17:20:49.237183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:14.874 [2024-12-05 17:20:49.237192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:14.874 [2024-12-05 17:20:49.237199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:14.874 [2024-12-05 17:20:49.237239] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:14.874 [2024-12-05 17:20:49.237248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:14.874 [2024-12-05 17:20:49.237263] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:14.874 [2024-12-05 17:20:49.237271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:14.874 [2024-12-05 17:20:49.237279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:14.874 [2024-12-05 17:20:49.237286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:14.874 [2024-12-05 17:20:49.237294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:14.874 [2024-12-05 17:20:49.237301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:34:14.875 [2024-12-05 17:20:49.237308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.265158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.265197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:15.137 [2024-12-05 17:20:49.265209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.807 ms 00:34:15.137 [2024-12-05 17:20:49.265218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.265309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.265319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:15.137 [2024-12-05 17:20:49.265331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:34:15.137 [2024-12-05 17:20:49.265339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.309257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.309307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:15.137 [2024-12-05 17:20:49.309320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.861 ms 00:34:15.137 [2024-12-05 17:20:49.309329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.309382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.309392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:15.137 [2024-12-05 17:20:49.309402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:34:15.137 [2024-12-05 17:20:49.309411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.309526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.309539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:15.137 [2024-12-05 17:20:49.309548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:34:15.137 [2024-12-05 17:20:49.309557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.309686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.309698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:15.137 [2024-12-05 17:20:49.309707] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:34:15.137 [2024-12-05 17:20:49.309716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.325698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.325739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:15.137 [2024-12-05 17:20:49.325750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.962 ms 00:34:15.137 [2024-12-05 17:20:49.325758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.325911] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:15.137 [2024-12-05 17:20:49.325926] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:15.137 [2024-12-05 17:20:49.325939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.325976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:15.137 [2024-12-05 17:20:49.325987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:34:15.137 [2024-12-05 17:20:49.325994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.338296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.338336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:15.137 [2024-12-05 17:20:49.338347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.284 ms 00:34:15.137 [2024-12-05 17:20:49.338355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.338487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.338505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:15.137 [2024-12-05 17:20:49.338515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:34:15.137 [2024-12-05 17:20:49.338526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.137 [2024-12-05 17:20:49.338577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.137 [2024-12-05 17:20:49.338586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:15.138 [2024-12-05 17:20:49.338595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:34:15.138 [2024-12-05 17:20:49.338609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.339224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.339246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:15.138 [2024-12-05 17:20:49.339256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:34:15.138 [2024-12-05 17:20:49.339263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.339284] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:15.138 [2024-12-05 17:20:49.339295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.339303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:15.138 [2024-12-05 17:20:49.339311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:15.138 [2024-12-05 17:20:49.339318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.351874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:15.138 [2024-12-05 17:20:49.352110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.352123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:15.138 [2024-12-05 17:20:49.352133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:34:15.138 [2024-12-05 17:20:49.352141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.354391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.354423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:15.138 [2024-12-05 17:20:49.354434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.221 ms 00:34:15.138 [2024-12-05 17:20:49.354441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.354520] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:15.138 [2024-12-05 17:20:49.354999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.355017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:15.138 [2024-12-05 17:20:49.355028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:34:15.138 [2024-12-05 17:20:49.355040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.355067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.355076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:15.138 [2024-12-05 17:20:49.355085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:15.138 [2024-12-05 17:20:49.355093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.355127] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:15.138 [2024-12-05 17:20:49.355137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.355146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:15.138 [2024-12-05 17:20:49.355155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:15.138 [2024-12-05 17:20:49.355162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.381865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.381912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:15.138 [2024-12-05 17:20:49.381925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.681 ms 00:34:15.138 [2024-12-05 17:20:49.381933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:15.138 [2024-12-05 17:20:49.382027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:15.138 [2024-12-05 17:20:49.382038] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:34:15.138 [2024-12-05 17:20:49.382048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:34:15.138 [2024-12-05 17:20:49.382065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:34:15.138 [2024-12-05 17:20:49.383405] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 153.389 ms, result 0
00:34:16.528 [2024-12-05T17:20:51.843Z] Copying: 15/1024 [MB] (15 MBps)
[... intermediate spdk_dd progress records elided: Copying advanced from 25/1024 MB (17:20:52Z) to 1022/1024 MB (17:21:52Z) ...]
[2024-12-05T17:21:53.245Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-12-05 17:21:53.016909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.878 [2024-12-05 17:21:53.016966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:35:18.878 [2024-12-05 17:21:53.016977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
00:35:18.878 [2024-12-05 17:21:53.016983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.878 [2024-12-05 17:21:53.017001] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:35:18.878 [2024-12-05 17:21:53.019094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.878 [2024-12-05 17:21:53.019122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:35:18.878 [2024-12-05 17:21:53.019135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.081 ms
00:35:18.878 [2024-12-05 17:21:53.019141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.878 [2024-12-05 17:21:53.019312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.878 [2024-12-05 17:21:53.019327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:35:18.878 [2024-12-05 17:21:53.019334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms
00:35:18.878 [2024-12-05 17:21:53.019340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.878 [2024-12-05 17:21:53.019360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.878 [2024-12-05 17:21:53.019366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:35:18.878 [2024-12-05 17:21:53.019372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:35:18.878 [2024-12-05 17:21:53.019378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.878 [2024-12-05 17:21:53.019417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.878 [2024-12-05 17:21:53.019424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:35:18.878 [2024-12-05 17:21:53.019430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:35:18.878 [2024-12-05 17:21:53.019436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.878 [2024-12-05 17:21:53.019446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:35:18.878 [2024-12-05 17:21:53.019456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open
[... Band 2 through Band 100: 0 / 261120 wr_cnt: 0 state: free (99 identical records elided) ...]
00:35:18.879 [2024-12-05 17:21:53.020059] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:35:18.879 [2024-12-05 17:21:53.020065] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f853cac7-bcba-40a7-941e-823694b449b9
00:35:18.879 [2024-12-05 17:21:53.020071] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840
00:35:18.879 [2024-12-05 17:21:53.020076] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 5408
00:35:18.879 [2024-12-05 17:21:53.020084] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 5376
00:35:18.879 [2024-12-05 17:21:53.020090] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0060
00:35:18.879 [2024-12-05 17:21:53.020096] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:35:18.879 [2024-12-05 17:21:53.020101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:35:18.879 [2024-12-05 17:21:53.020106] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:35:18.879 [2024-12-05 17:21:53.020111] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:35:18.879 [2024-12-05 17:21:53.020115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:35:18.879 [2024-12-05 17:21:53.020121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.879 [2024-12-05 17:21:53.020126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:35:18.879 [2024-12-05 17:21:53.020131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms
00:35:18.879 [2024-12-05 17:21:53.020137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.879 [2024-12-05 17:21:53.030071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.879 [2024-12-05 17:21:53.030102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:35:18.879 [2024-12-05 17:21:53.030110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.921 ms
00:35:18.879 [2024-12-05 17:21:53.030116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:35:18.879 [2024-12-05 17:21:53.030381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:35:18.879 [2024-12-05 17:21:53.030417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:35:18.879 [2024-12-05 17:21:53.030428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms
00:35:18.879 [2024-12-05 17:21:53.030435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... 12 rollback trace steps elided: Initialize reloc, Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev, Open base bdev; each reports duration: 0.000 ms, status: 0 ...]
00:35:18.880 [2024-12-05 17:21:53.166892] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 149.961 ms, result 0
00:35:19.452
00:35:19.452
00:35:19.452 17:21:53 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:21.364 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:35:21.364 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:35:21.364 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:35:21.624 Process with pid 83608 is not found
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 83608
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 83608 ']'
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 83608
00:35:21.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (83608) - No such process
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 83608 is not found'
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm
00:35:21.624 Remove shared memory files
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:21.624 17:21:55 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f
00:35:21.625 17:21:55 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_band_md /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_l2p_l1 /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_l2p_l2 /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_l2p_l2_ctx /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_nvc_md /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_p2l_pool /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_sb /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_sb_shm /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_trim_bitmap /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_trim_log /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_trim_md /dev/hugepages/ftl_f853cac7-bcba-40a7-941e-823694b449b9_vmap
00:35:21.625 17:21:55 ftl.ftl_restore_fast
-- ftl/common.sh@207 -- # rm -f rm -f 00:35:21.625 17:21:55 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:21.625 17:21:55 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:21.625 ************************************ 00:35:21.625 END TEST ftl_restore_fast 00:35:21.625 ************************************ 00:35:21.625 00:35:21.625 real 4m24.221s 00:35:21.625 user 4m12.965s 00:35:21.625 sys 0m11.201s 00:35:21.625 17:21:55 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:21.625 17:21:55 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@14 -- # killprocess 74947 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@954 -- # '[' -z 74947 ']' 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@958 -- # kill -0 74947 00:35:21.625 Process with pid 74947 is not found 00:35:21.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74947) - No such process 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74947 is not found' 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:21.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86334 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86334 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@835 -- # '[' -z 86334 ']' 00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:21.625 [2024-12-05 17:21:55.957022] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization... 
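The killprocess teardown above hinges on bash's `kill -0` probe: signal 0 delivers nothing but reports, via the exit status, whether the pid still exists and is signalable, which is why a target that already exited yields the harmless "No such process" line instead of failing the stage. A minimal standalone sketch of that idiom, assuming plain bash rather than SPDK's actual autotest_common.sh helper (the function body and the pid below are illustrative only):

    #!/usr/bin/env bash
    # Sketch of the kill-if-alive cleanup idiom; not SPDK's real helper.
    killprocess() {
        local pid=$1
        # kill -0 sends no signal; it only tests that $pid exists and
        # that we are allowed to signal it (exit status 0 if so).
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        kill "$pid"                       # polite SIGTERM first
        wait "$pid" 2>/dev/null || true   # reap it if it is our child
    }
    killprocess 83608

Note that `wait` only reaps children of the current shell; for an unrelated pid, a short polling loop on `kill -0` would be the hedged substitute.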
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@14 -- # killprocess 74947
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@954 -- # '[' -z 74947 ']'
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@958 -- # kill -0 74947
00:35:21.625 Process with pid 74947 is not found
00:35:21.625 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (74947) - No such process
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 74947 is not found'
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:35:21.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=86334
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@20 -- # waitforlisten 86334
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@835 -- # '[' -z 86334 ']'
00:35:21.625 17:21:55 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:35:21.625 17:21:55 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:21.625 [2024-12-05 17:21:55.957022] Starting SPDK v25.01-pre git sha1 8d3947977 / DPDK 24.03.0 initialization...
00:35:21.625 [2024-12-05 17:21:55.957168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86334 ]
00:35:21.885 [2024-12-05 17:21:56.119856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:21.885 [2024-12-05 17:21:56.239323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:35:22.829 17:21:56 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:35:22.829 17:21:56 ftl -- common/autotest_common.sh@868 -- # return 0
00:35:22.829 17:21:56 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:35:22.829 nvme0n1
00:35:22.829 17:21:57 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:35:22.829 17:21:57 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:35:22.829 17:21:57 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:35:23.090 17:21:57 ftl -- ftl/common.sh@28 -- # stores=76f6fa8d-f5c1-4aa3-8d44-ce8380410518
00:35:23.090 17:21:57 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:35:23.090 17:21:57 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 76f6fa8d-f5c1-4aa3-8d44-ce8380410518
00:35:23.351 17:21:57 ftl -- ftl/ftl.sh@23 -- # killprocess 86334
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@954 -- # '[' -z 86334 ']'
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@958 -- # kill -0 86334
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@959 -- # uname
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86334
00:35:23.351 killing process with pid 86334
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86334'
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@973 -- # kill 86334
00:35:23.351 17:21:57 ftl -- common/autotest_common.sh@978 -- # wait 86334
00:35:24.737 17:21:58 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:35:24.998 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:24.998 Waiting for block devices as requested
00:35:24.998 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:35:24.998 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:35:25.260 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:35:25.260 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:35:30.550 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:35:30.550 Remove shared memory files
00:35:30.550 17:22:04 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:35:30.550 17:22:04 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:35:30.550 17:22:04 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:35:30.550 17:22:04 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:35:30.550 17:22:04 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:35:30.550 17:22:04 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:35:30.550 17:22:04 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:35:30.550 ************************************
00:35:30.550 END TEST ftl
00:35:30.550 ************************************
00:35:30.550
00:35:30.551 real 17m9.896s
00:35:30.551 user 19m9.456s
00:35:30.551 sys 1m27.957s
00:35:30.551 17:22:04 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:30.551 17:22:04 ftl -- common/autotest_common.sh@10 -- # set +x
00:35:30.551 17:22:04 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:30.551 17:22:04 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:30.551 17:22:04 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:30.551 17:22:04 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:30.551 17:22:04 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:30.551 17:22:04 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:30.551 17:22:04 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:30.551 17:22:04 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:30.551 17:22:04 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:30.551 17:22:04 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:30.551 17:22:04 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:30.551 17:22:04 -- common/autotest_common.sh@10 -- # set +x
00:35:30.551 17:22:04 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:30.551 17:22:04 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:30.551 17:22:04 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:30.551 17:22:04 -- common/autotest_common.sh@10 -- # set +x
00:35:31.935 INFO: APP EXITING
00:35:31.935 INFO: killing all VMs
00:35:31.935 INFO: killing vhost app
00:35:31.935 INFO: EXIT DONE
00:35:32.196 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:32.457 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:35:32.457 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:35:32.718 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:35:32.718 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:35:32.979 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:35:33.240 Cleaning
00:35:33.240 Removing: /var/run/dpdk/spdk0/config
00:35:33.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:33.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:33.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:33.240 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:33.501 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:33.501 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:33.501 Removing: /var/run/dpdk/spdk0
00:35:33.501 Removing: /var/run/dpdk/spdk_pid56893
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57090
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57297
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57395
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57429
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57546
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57564
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57758
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57851
00:35:33.501 Removing: /var/run/dpdk/spdk_pid57941
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58047
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58138
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58178
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58209
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58285
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58363
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58794
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58847
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58904
00:35:33.501 Removing: /var/run/dpdk/spdk_pid58915
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59016
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59022
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59119
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59129
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59188
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59206
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59259
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59271
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59431
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59468
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59551
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59723
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59807
00:35:33.501 Removing: /var/run/dpdk/spdk_pid59844
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60277
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60375
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60484
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60537
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60558
00:35:33.501 Removing: /var/run/dpdk/spdk_pid60641
00:35:33.501 Removing: /var/run/dpdk/spdk_pid61267
00:35:33.501 Removing: /var/run/dpdk/spdk_pid61306
00:35:33.501 Removing: /var/run/dpdk/spdk_pid61775
00:35:33.501 Removing: /var/run/dpdk/spdk_pid61868
00:35:33.501 Removing: /var/run/dpdk/spdk_pid61977
00:35:33.501 Removing: /var/run/dpdk/spdk_pid62030
00:35:33.501 Removing: /var/run/dpdk/spdk_pid62061
00:35:33.501 Removing: /var/run/dpdk/spdk_pid62081
00:35:33.501 Removing: /var/run/dpdk/spdk_pid63941
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64077
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64082
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64094
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64143
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64147
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64159
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64204
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64208
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64220
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64259
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64263
00:35:33.501 Removing: /var/run/dpdk/spdk_pid64275
00:35:33.501 Removing: /var/run/dpdk/spdk_pid65671
00:35:33.501 Removing: /var/run/dpdk/spdk_pid65768
00:35:33.501 Removing: /var/run/dpdk/spdk_pid67177
00:35:33.501 Removing: /var/run/dpdk/spdk_pid68938
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69006
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69088
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69192
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69284
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69381
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69455
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69530
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69634
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69726
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69827
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69901
00:35:33.501 Removing: /var/run/dpdk/spdk_pid69971
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70081
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70172
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70269
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70343
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70418
00:35:33.501 Removing: /var/run/dpdk/spdk_pid70528
00:35:33.502 Removing: /var/run/dpdk/spdk_pid70614
00:35:33.502 Removing: /var/run/dpdk/spdk_pid70710
00:35:33.502 Removing: /var/run/dpdk/spdk_pid70784
00:35:33.502 Removing: /var/run/dpdk/spdk_pid70858
00:35:33.763 Removing: /var/run/dpdk/spdk_pid70940
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71009
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71112
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71203
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71297
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71367
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71441
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71521
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71590
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71693
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71785
00:35:33.763 Removing: /var/run/dpdk/spdk_pid71934
00:35:33.763 Removing: /var/run/dpdk/spdk_pid72214
00:35:33.763 Removing: /var/run/dpdk/spdk_pid72249
00:35:33.763 Removing: /var/run/dpdk/spdk_pid72704
00:35:33.763 Removing: /var/run/dpdk/spdk_pid72892
00:35:33.763 Removing: /var/run/dpdk/spdk_pid72993
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73103
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73156
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73176
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73478
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73538
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73611
00:35:33.763 Removing: /var/run/dpdk/spdk_pid73996
00:35:33.763 Removing: /var/run/dpdk/spdk_pid74142
00:35:33.763 Removing: /var/run/dpdk/spdk_pid74947
00:35:33.763 Removing: /var/run/dpdk/spdk_pid75079
00:35:33.763 Removing: /var/run/dpdk/spdk_pid75249
00:35:33.763 Removing: /var/run/dpdk/spdk_pid75352
00:35:33.763 Removing: /var/run/dpdk/spdk_pid75662
00:35:33.763 Removing: /var/run/dpdk/spdk_pid75925
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76286
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76469
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76662
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76719
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76923
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76949
00:35:33.763 Removing: /var/run/dpdk/spdk_pid76996
00:35:33.763 Removing: /var/run/dpdk/spdk_pid77229
00:35:33.763 Removing: /var/run/dpdk/spdk_pid77465
00:35:33.763 Removing: /var/run/dpdk/spdk_pid78030
00:35:33.763 Removing: /var/run/dpdk/spdk_pid78694
00:35:33.763 Removing: /var/run/dpdk/spdk_pid79279
00:35:33.763 Removing: /var/run/dpdk/spdk_pid79983
00:35:33.763 Removing: /var/run/dpdk/spdk_pid80137
00:35:33.763 Removing: /var/run/dpdk/spdk_pid80223
00:35:33.763 Removing: /var/run/dpdk/spdk_pid80656
00:35:33.763 Removing: /var/run/dpdk/spdk_pid80719
00:35:33.763 Removing: /var/run/dpdk/spdk_pid81325
00:35:33.763 Removing: /var/run/dpdk/spdk_pid81800
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82548
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82681
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82724
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82782
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82839
00:35:33.763 Removing: /var/run/dpdk/spdk_pid82893
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83093
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83192
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83253
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83313
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83349
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83416
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83608
00:35:33.763 Removing: /var/run/dpdk/spdk_pid83829
00:35:33.763 Removing: /var/run/dpdk/spdk_pid84444
00:35:33.763 Removing: /var/run/dpdk/spdk_pid85060
00:35:33.763 Removing: /var/run/dpdk/spdk_pid85640
00:35:33.763 Removing: /var/run/dpdk/spdk_pid86334
00:35:33.763 Clean
00:35:33.763 17:22:08 -- common/autotest_common.sh@1453 -- # return 0
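Before tearing the target down, the clear_lvols step above (ftl/common.sh@28-30) drains every logical-volume store so the base bdev comes back clean: bdev_lvol_get_lvstores returns a JSON array, jq extracts each store's uuid, and bdev_lvol_delete_lvstore removes them one at a time. Condensed into a standalone sketch using the same RPCs and the rpc.py path from this job (the loop is equivalent to, not copied from, the helper):

    #!/usr/bin/env bash
    # Sketch of the clear_lvols idiom seen at ftl/common.sh@28 above.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # List every lvolstore UUID over the default /var/tmp/spdk.sock socket.
    stores=$("$RPC" bdev_lvol_get_lvstores | jq -r '.[] | .uuid')
    # Delete each store so no stale lvols survive into the next test.
    for lvs in $stores; do
        "$RPC" bdev_lvol_delete_lvstore -u "$lvs"
    done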
00:35:33.763 17:22:08 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:33.763 17:22:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.763 17:22:08 -- common/autotest_common.sh@10 -- # set +x
00:35:33.763 17:22:08 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:33.763 17:22:08 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.763 17:22:08 -- common/autotest_common.sh@10 -- # set +x
00:35:33.763 17:22:08 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:35:34.024 17:22:08 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:35:34.024 17:22:08 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:35:34.024 17:22:08 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:34.024 17:22:08 -- spdk/autotest.sh@398 -- # hostname
00:35:34.024 17:22:08 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:35:34.024 geninfo: WARNING: invalid characters removed from testname!
00:36:00.659 17:22:33 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:02.556 17:22:36 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:04.458 17:22:38 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:06.372 17:22:40 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:08.292 17:22:42 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:10.836 17:22:44 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:36:12.828 17:22:46 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:12.828 17:22:46 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:12.828 17:22:46 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:36:12.828 17:22:46 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:12.828 17:22:46 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:12.828 17:22:46 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:36:12.837 + [[ -n 5044 ]]
00:36:12.838 + sudo kill 5044
00:36:12.851 [Pipeline] }
00:36:12.851 [Pipeline] // timeout
00:36:12.855 [Pipeline] }
00:36:12.868 [Pipeline] // stage
00:36:12.872 [Pipeline] }
00:36:12.885 [Pipeline] // catchError
00:36:12.892 [Pipeline] stage
00:36:12.894 [Pipeline] { (Stop VM)
00:36:12.905 [Pipeline] sh
00:36:13.187 + vagrant halt
00:36:15.748 ==> default: Halting domain...
00:36:21.042 [Pipeline] sh
00:36:21.326 + vagrant destroy -f
00:36:23.895 ==> default: Removing domain...
00:36:24.855 [Pipeline] sh
00:36:25.140 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:36:25.151 [Pipeline] }
00:36:25.166 [Pipeline] // stage
00:36:25.171 [Pipeline] }
00:36:25.186 [Pipeline] // dir
00:36:25.191 [Pipeline] }
00:36:25.207 [Pipeline] // wrap
00:36:25.213 [Pipeline] }
00:36:25.226 [Pipeline] // catchError
00:36:25.236 [Pipeline] stage
00:36:25.238 [Pipeline] { (Epilogue)
00:36:25.251 [Pipeline] sh
00:36:25.537 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:30.836 [Pipeline] catchError
00:36:30.838 [Pipeline] {
00:36:30.849 [Pipeline] sh
00:36:31.132 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:31.132 Artifacts sizes are good
00:36:31.141 [Pipeline] }
00:36:31.155 [Pipeline] // catchError
00:36:31.165 [Pipeline] archiveArtifacts
00:36:31.171 Archiving artifacts
00:36:31.332 [Pipeline] cleanWs
00:36:31.354 [WS-CLEANUP] Deleting project workspace...
00:36:31.354 [WS-CLEANUP] Deferred wipeout is used...
00:36:31.373 [WS-CLEANUP] done
00:36:31.375 [Pipeline] }
00:36:31.391 [Pipeline] // stage
00:36:31.396 [Pipeline] }
00:36:31.415 [Pipeline] // node
00:36:31.435 [Pipeline] End of Pipeline
00:36:31.556 Finished: SUCCESS
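For reference, the coverage post-processing that spdk/autotest.sh ran above (steps @398 through @408) boils down to one capture plus a merge-and-filter chain. A condensed sketch with the repeated --rc flags and long output paths omitted (file names below mirror the log; the real commands write into the job's ../output directory):

    # Capture counters for the whole tree, tagged with the hostname (@398).
    lcov -q -c --no-external -d ./spdk -t "$(hostname)" -o cov_test.info
    # Merge the pre-test baseline with the test-time capture (@399).
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # Strip bundled DPDK, system headers, and helper apps (@400-@407).
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info
    lcov -q -r cov_total.info '*/examples/vmd/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
    lcov -q -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
    # Drop the intermediate captures (@408).
    rm -f cov_base.info cov_test.info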